Nowakowski R., Landman B. Combinatorial Game Theory 2022
Combinatorial Game Theory
De Gruyter Proceedings in Mathematics
A Special Collection in Honor of Elwyn Berlekamp, John
H. Conway and Richard K. Guy
Edited by
Richard J. Nowakowski, Bruce M. Landman, Florian Luca,
Melvyn B. Nathanson, Jaroslav Nešetřil, and Aaron
Robertson
Mathematics Subject Classification 2010
05A, 05C55, 05C65, 05D, 11A, 11B, 11D, 11K, 11N, 11P, 11Y, 91A46
Editors

Richard J. Nowakowski
Dalhousie University
Dept. of Mathematics & Statistics
Chase Building
Halifax NS B3H 3J5
Canada
r.nowakowski@dal.ca

Melvyn B. Nathanson
Lehman College (CUNY)
Department of Mathematics
250 Bedford Park Boulevard West
Bronx NY 10468
USA
Melvyn.Nathanson@lehman.cuny.edu
ISBN 978-3-11-075534-3
e-ISBN (PDF) 978-3-11-075541-1
e-ISBN (EPUB) 978-3-11-075549-7
www.degruyter.com
Preface
What is 1 + 1 + 1?
John H. Conway, 1973
Individually, each of Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy has received much richly deserved praise. Each made lasting contributions to many areas of mathematics. This volume is dedicated to their work in combinatorial game theory. It is due to their efforts that combinatorial game theory exists as a subject.
https://doi.org/10.1515/9783110755411-201
One other person deserves to be mentioned: Louise Guy, Richard's wife. A gracious lady, she made every visitor to their house feel welcome. Some people have asked why the combinatorial game players, Left and Right, are female and male, respectively. The original reasons have been forgotten, but after Winning Ways appeared, it became a mark of respect to remember them as Louise and Richard.
The chapters of this volume illustrate how far the subject has developed. A general approach to impartial misère games was begun only by Plambeck [71]. A. Siegel (a student of Berlekamp), a major figure in developing this theory, pushes it further in Chapter 20. The theory of partizan misère games was only started in 2007 [69]. Playing in the context of all misère games, Chapter 10 analyzes a specific game. Chapter 16 contains important results for analyzing misère dead-ending games. In Winning Ways, dots-&-boxes and top-entails each fail to fit into the theory, and each in a different way. They are only partially analyzed there, and that via ad hoc methods. Chapter 17 finds a normal play extension that covers both types of games. (The authors think this would have intrigued them but are not sure if they would have fully approved.)
Chapters 1, 5–9, 12, 15, 18, and 19 either directly extend the theory or consider games related to ones given in Winning Ways. As is evidenced by Richard K. Guy's early contributions, it is also important to have new sources of games. These are presented in Chapters 2, 3, 11, 13, and 14.
Serendipity gave Chapter 4. This paper is the foundation of Chapters 1 and 5. It gives a simple, effective-for-humans test for when games are numbers. The authors are sure that Elwyn, John, and Richard would have started it with a rhyming couplet that everyone would then remember.
Elwyn, John, and Richard gave freely of their time. Many people will remember the
coffee-time and evenings at the MSRI and BIRS Workshops. Each would be at a large
table fully occupied by anyone who wished to be there, discussing and sometimes
solving problems. Students were especially welcome. All combinatorial games work-
shops now follow this inclusive model. A large number of papers originate at these
workshops, have several coauthors, and include students. They shared their time outside of conferences and workshops. Many students will remember those offhand moments, with one or more of them, that often stretched to hours. I was a second-year undergraduate student when I first met John; he immediately asked me what 1 + 1 + 1 was. Even after I answered "3", he still took the time to explain the intricacies of 3-player games. (The question is still unanswered.)
Their wit, wisdom, and willingness to play provided people with pleasure. They
will be sorely missed, but their legacy lives on.
Richard J. Nowakowski
[16] Elwyn R. Berlekamp. The 4g4g4g4g4 problems and solutions. In Richard J. Nowakowski,
editor, More Games of No Chance, MSRI Publications, volume 42, pages 231–241. Cambridge
University Press, 2002.
[17] Elwyn R. Berlekamp. Four games for Gardner. In D. Wolfe and T. Rodgers, editors, Puzzler’s
Tribute: A Feast for the Mind, pages 383–386. A K Peters, Ltd., Natick, MA, 2002. Honoring
Martin Gardner.
[18] Elwyn R. Berlekamp and K. Scott. Forcing your opponent to stay in control of a loony
dots-and-boxes endgame. In Richard J. Nowakowski, editor, More Games of No Chance, MSRI
Publications, volume 42, pages 317–330. Cambridge University Press, 2002.
[19] Elwyn R. Berlekamp. Idempotents among partisan games. In Richard J. Nowakowski, editor,
More Games of No Chance, MSRI Publications, volume 42, pages 3–23. Cambridge University
Press, 2002.
[20] Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. Winning Ways for Your Mathematical
Plays. Vol. 2, second edition. A K Peters, Ltd., 2003.
[21] Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. Winning Ways for Your Mathematical
Plays. Vol. 3, second edition. A K Peters, Ltd., 2003.
[22] Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. Winning Ways for Your Mathematical
Plays. Vol. 4, second edition. A K Peters, Ltd., 2004.
[23] Elwyn R. Berlekamp. Yellow-brown hackenbush. In Michael H. Albert and Richard J.
Nowakowski, editors, Games of No Chance 3, MSRI, volume 56, pages 413–418. Cambridge
Univ. Press, 2009.
[24] Elwyn R. Berlekamp and Richard M. Low. Entrepreneurial chess. Internat. J. Game Theory,
47(2):379–415, 2018.
[25] Elwyn R. Berlekamp. Temperatures of games and coupons. In Urban Larsson, editor, Games of
No Chance 5, Mathematical Sciences Research Institute Publications, volume 70, pages 21–33.
Cambridge University Press, 2019.
[26] John H. Conway. All numbers great and small. Res. Paper No. 149, Univ. of Calgary Math. Dept.,
1972.
[27] John H. Conway and H. S. M. Coxeter. Triangulated polygons and frieze patterns. Math. Gaz.,
57:87–94; 175–183, 1973.
[28] John H. Conway. On Numbers and Games. Academic Press, 1976.
[29] John H. Conway. All games bright and beautiful. Amer. Math. Monthly, 84(6):417–434, 1977.
[30] John H. Conway. Loopy games. Ann. Discrete Math., 3:55–74, 1978. Advances in graph theory
(Cambridge Combinatorial Conf., Trinity College, Cambridge, 1977).
[31] John H. Conway. A gamut of game theories. Math. Mag., 51(1):5–12, 1978.
[32] John H. Conway and N. J. A. Sloane. Lexicographic codes: error-correcting codes from game
theory. IEEE Trans. Inform. Theory, 32(3):337–348, 1986.
[33] John H. Conway. More ways of combining games. In R. K. Guy, editor, Proc. Symp. Appl. Math.,
Combinatorial Games, volume 43, pages 57–71. Amer. Math. Soc., Providence, RI, 1991.
[34] John H. Conway. Numbers and games. In R. K. Guy, editor, Proc. Symp. Appl. Math.,
Combinatorial Games, volume 43, pages 23–34. Amer. Math. Soc., Providence, RI, 1991.
[35] W. L. Sibert and J. H. Conway. Mathematical Kayles. Internat. J. Game Theory, 20(3):237–246,
1992.
[36] John H. Conway. On numbers and games. In Summer Course 1993: The Real Numbers (Dutch),
CWI Syllabi, volume 35, pages 101–124. Math. Centrum Centrum Wisk. Inform., Amsterdam,
1993.
[37] John H. Conway. The surreals and the reals. Real numbers, generalizations of the reals,
and theories of continua. In Synthese Lib., volume 242, pages 93–103. Kluwer Acad. Publ.,
Dordrecht, 1994.
[38] John H. Conway. The angel problem. In R. J. Nowakowski, editor, Games of No Chance, Proc.
MSRI Workshop on Combinatorial Games, July, 1994, Berkeley, CA, MSRI Publ., volume 29,
pages 3–12. Cambridge University Press, Cambridge, 1996.
[39] John H. Conway. M13. In Surveys in Combinatorics, London Math. Soc. Lecture Note Ser.,
volume 241, pages 1–11. Cambridge Univ. Press, Cambridge, 1997.
[40] John H. Conway. On Numbers and Games, 2nd edition. A K Peters, Ltd., 2001. First edition
published in 1976 by Academic Press.
[41] John H. Conway. More infinite games. In Richard J. Nowakowski, editor, More Games of No
Chance, MSRI Publications, volume 42, pages 31–36. Cambridge University Press, 2002.
[42] Richard K. Guy and Cedric A. B. Smith. The G-values of various games. Proc. Camb. Phil. Soc.,
52:514–526, 1956.
[43] Richard K. Guy. Twenty questions concerning Conway’s sylver coinage. Amer. Math. Monthly,
83:634–637, 1976.
[44] Richard K. Guy. Games are graphs, indeed they are trees. In Proc. 2nd Carib. Conf. Combin. and
Comput., pages 6–18. Letchworth Press, Barbados, 1977.
[45] Richard K. Guy. Partisan and impartial combinatorial games. In Combinatorics (Proc. Fifth
Hungarian Colloq., Keszthely, 1976), Vol. I, Colloq. Math. Soc. János Bolyai, volume 18, pages
437–461. North-Holland, Amsterdam, 1978.
[46] Richard K. Guy. Partizan and impartial combinatorial games. Colloq. Math. Soc. János Bolyai,
18:437–461, 1978. Proc. 5th Hungar. Conf. Combin. Vol. I (A. Hajnal and V. T. Sós, eds.),
Keszthely, Hungary, 1976, North-Holland.
[47] Richard K. Guy. Partizan games. In Colloques Internationaux C. N. R. No. 260 — Problèmes
Combinatoires et Théorie des Graphes, pages 199–205, 1979.
[48] Richard K. Guy. Anyone for twopins? In D. A. Klarner, editor, The Mathematical Gardner, pages
2–15. Wadsworth Internat., Belmont, CA, 1981.
[49] Richard K. Guy. Graphs and games. In L. W. Beineke and R. J. Wilson, editors, Selected Topics in
Graph Theory, volume 2, pages 269–295. Academic Press, London, 1983.
[50] Richard K. Guy. John Isbell's game of beanstalk and John Conway's game of beans-don't-talk.
Math. Mag., 59:259–269, 1986.
[51] Richard K. Guy. Fair Game, COMAP Math. Exploration Series. Arlington, MA, 1989.
[52] Richard K. Guy. Fair Game: How to play impartial combinatorial games. COMAP, Inc., 60 Lowell
Street, Arlington, MA 02174, 1989.
[53] Richard K. Guy, editor. Combinatorial Games, Proceedings of Symposia in Applied
Mathematics, volume 43. American Mathematical Society, Providence, RI, 1991. Lecture notes
prepared for the American Mathematical Society Short Course held in Columbus, Ohio, August
6–7, 1990, AMS Short Course Lecture Notes.
[54] Richard K. Guy. Impartial games. In Combinatorial Games (Columbus, OH, 1990), Proc.
Sympos. Appl. Math., volume 43, pages 35–55. Amer. Math. Soc., Providence, RI, 1991.
[55] Richard K. Guy. Mathematics from fun & fun from mathematics; an informal autobiographical
history of combinatorial games. In J. H. Ewing and F. W. Gehring, editors, Paul Halmos:
Celebrating 50 Years of Mathematics, pages 287–295. Springer Verlag, New York, NY, 1991.
[56] Richard K. Guy. Unsolved problems in combinatorial games. In American Mathematical Society
Proceedings of the Symposium on Applied Mathematics, volume 43, 1991. Check my homepage
for a copy, http://www.gustavus.edu/~wolfe.
[57] Richard K. Guy. What is a Game? In Combinatorial Games, Proceedings of Symposia in Applied
Mathematics, volume 43, 1991.
[58] Richard K. Guy. Combinatorial games. In R. L. Graham, M. Grötschel, and L. Lovász, editors,
Handbook of Combinatorics, volume II, pages 2117–2162. North-Holland, Amsterdam, 1995.
[59] Richard K. Guy. Impartial games. In R. J. Nowakowski, editor, Games of No Chance, Proc.
MSRI Workshop on Combinatorial Games, July, 1994, Berkeley, CA, MSRI Publ., volume 29,
pages 61–78. Cambridge University Press, Cambridge, 1996. Earlier version in: Combinatorial
Games, Proc. Symp. Appl. Math. (R. K. Guy, ed.), Vol. 43, Amer. Math. Soc., Providence, RI,
1991, pp. 35–55.
[60] Richard K. Guy. Unsolved problems in combinatorial games. In R. J. Nowakowski, editor, Games
of No Chance, MSRI Publ., volume 29, pages 475–491. Cambridge University Press, 1996.
[61] Richard K. Guy. What is a Game? In Richard Nowakowski, editor, Games of No Chance, MSRI
Publ., volume 29, pages 43–60. Cambridge University Press, 1996.
[62] Ian Caines, Carrie Gates, Richard K. Guy, and Richard J. Nowakowski. Unsolved problems:
periods in taking and splitting games. Amer. Math. Monthly, 106:359–361, 1999.
[63] Richard K. Guy. Aviezri Fraenkel and combinatorial games. Elect. J. Combin, 8:#I2, 2001.
[64] Richard K. Guy and Richard J. Nowakowski. Unsolved problems in combinatorial games. In
Richard J. Nowakowski, editor, More Games of No Chance, MSRI Publications, volume 42,
pages 457–473. Cambridge University Press, 2002.
[65] Richard K. Guy and Richard J. Nowakowski. Unsolved problems in combinatorial game theory.
In M. H. Albert and R. J. Nowakowski, editors, Games of No Chance 3, MSRI, pages 465–489.
Cambridge Univ. Press, 2009.
[66] Alex Fink and Richard K. Guy. The number-pad game. College Math. J., 38:260–264, 2007.
[67] Charles L. Bouton. Nim, a game with a complete mathematical theory. Annals of Mathematics,
3(2):35–39, 1902.
[68] Patrick M. Grundy. Mathematics and games. Eureka, 2:6–8, 1939.
[69] G. A. Mesdal and Paul Ottaway. Simplification of partizan games in misère play. INTEGERS,
7:#G06, 2007.
[70] John Milnor. Sums of positional games. In: H. W. Kuhn and A. W. Tucker, eds. Contributions to
the Theory of Games, Vol. 2, Ann. of Math. Stud., volume 28, pages 291–301. Princeton, 1953.
[71] T. E. Plambeck. Taming the wild in impartial combinatorial games. INTEGERS, 5:#G05, 2005.
[72] Roland P. Sprague. Über mathematische Kampfspiele. Tôhoku Math. J., 41:438–444, 1935–36.
Contents
Preface | V
Kyle Burke, Matthew Ferland, Michael Fisher, Valentin Gledel, and Craig
Tennenhouse
The game of blocking pebbles | 17
Alda Carvalho, Melissa A. Huggan, Richard J. Nowakowski, and Carlos Pereira dos
Santos
A note on numbers | 67
Alda Carvalho, Melissa A. Huggan, Richard J. Nowakowski, and Carlos Pereira dos
Santos
Ordinal sums, clockwise hackenbush, and domino shave | 77
L. R. Haff
Playing Bynum’s game cautiously | 201
Yuki Irie
A base-p Sprague–Grundy-type theorem for p-calm subtraction games: Welter’s
game and representations of generalized symmetric groups | 281
Urban Larsson, Rebecca Milley, Richard Nowakowski, Gabriel Renault, and Carlos
Santos
Recursive comparison tests for dicot and dead-ending games under misère
play | 309
James B. Martin
Extended Sprague–Grundy theory for locally finite games, and applications to
random game-trees | 343
Aaron N. Siegel
On the structure of misère impartial games | 389
Anthony Bonato, Melissa A. Huggan, and Richard J. Nowakowski
The game of flipping coins
Abstract: We consider flipping coins, a partizan version of the impartial game turn-
ing turtles, played on lines of coins. We show that the values of this game are num-
bers, and these are found by first applying a reduction, then decomposing the position
into an iterated ordinal sum. This is unusual since moves in the middle of the line do
not eliminate the rest of the line. Moreover, if G is decomposed into lines H and K,
then G = (H : K^R). This is in contrast to hackenbush strings, where G = (H : K).
1 Introduction
In Winning Ways, Volume 3 [3], Berlekamp, Conway, and Guy introduced turning
turtles and considered many variants. Each game involves a finite row of turtles,
either on feet or backs, and a move is to turn one turtle over onto its back, with the
option of flipping a number of other turtles, to the left, each to the opposite of its cur-
rent state (feet or back). The number depends on the rules of the specific game. The
authors moved to playing with coins as playing with turtles is cruel.
These games can be solved using the Sprague–Grundy theory for impartial games
[2], but the structure and strategies of some variants are interesting. The strategy for
moebius (flip up to five coins) played with 18 coins, involves Möbius transformations;
for mogul (flip up to seven coins) on 24 coins, it involves the miracle octad generator
developed by R. Curtis in his work on the Mathieu group M24 and the Leech lattice
[6, 7]; ternups [3] (flip three equally spaced coins) requires ternary expansions; and
turning corners [3], a two-dimensional version where the corners of a rectangle are
flipped, needs nim-multiplication.
We consider a simple partizan version of turning turtles, also played with
coins. We give a complete solution and show that it involves ordinal sums. This is
somewhat surprising since moves in the middle of the line do not eliminate moves at
the end. Compare this with hackenbush strings [2] and domino shave [5].
Acknowledgement: Anthony Bonato was supported by an NSERC Discovery grant. Melissa A. Hug-
gan was supported by an NSERC Postdoctoral Fellowship. The author also thanks the Department of
Mathematics at Ryerson University, which hosted the author while the research took place. Richard J.
Nowakowski was supported by an NSERC Discovery grant.
Anthony Bonato, Department of Mathematics, Ryerson University, Toronto, Ontario, Canada, e-mail:
abonato@ryerson.ca
Melissa A. Huggan, Department of Mathematics and Computer Science, Mount Allison University,
Sackville, New Brunswick, Canada, e-mail: mhuggan@mta.ca
Richard J. Nowakowski, Department of Mathematics and Statistics, Dalhousie University, Halifax,
Nova Scotia, Canada, e-mail: r.nowakowski@dal.ca
https://doi.org/10.1515/9783110755411-001
We will denote heads by 0 and tails by 1. Our partizan version will be played with a line of coins, represented by a 0–1 sequence d_1 d_2 … d_n, where d_i ∈ {0, 1}. With this position, we associate the binary number ∑_{i=1}^{n} d_i 2^{i−1}. Left moves by choosing some pair of coins d_i, d_j, i < j, where d_i = d_j = 1, and flips them over so that both coins are 0s. Right also chooses a pair d_k, d_ℓ, k < ℓ, with d_k = 0 and d_ℓ = 1, and flips them over. If j is the greatest index such that d_j = 1, then the coins d_k, k > j, are deleted. The game eventually ends since the associated binary number decreases with every move. We call this game flipping coins.
Another way to model flipping coins is to consider tokens on a strip of loca-
tions. Left can remove a pair of tokens, and Right is able to move a token to an open
space to its left. We use the coin flipping model for this game to be consistent with the
literature.
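To make the rules concrete, here is a minimal Python sketch of the moves as described above (the helper names are ours, not the authors'; trailing coins after the last 1 are dropped, per the deletion rule):

```python
def norm(d):
    """Drop coins after the last 1, per the deletion rule."""
    return d.rstrip('0')

def left_moves(d):
    """Left picks d_i = d_j = 1 with i < j and turns both into 0s."""
    ones = [i for i, c in enumerate(d) if c == '1']
    for a, i in enumerate(ones):
        for j in ones[a + 1:]:
            yield norm(d[:i] + '0' + d[i + 1:j] + '0' + d[j + 1:])

def right_moves(d):
    """Right picks d_k = 0 and d_l = 1 with k < l and flips both."""
    for k, ck in enumerate(d):
        if ck == '0':
            for l in range(k + 1, len(d)):
                if d[l] == '1':
                    yield norm(d[:k] + '1' + d[k + 1:l] + '0' + d[l + 1:])
```

For instance, from 011 Right can move to 101 or 11, while Left's only move is to the empty position.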
The game is biased toward Left. If there is a nonzero even number of 1s in a position, then Left always has a move; that is, she will win. Left also wins any nontrivial position starting with 1. However, there are positions that Right wins. The two-part method for finding the outcomes and values of the remaining positions can be applied to all positions. First, apply a modification to the position (unless it is all 1s) that reduces the number of consecutive 1s to at most three. After this reduction, build an iterated ordinal sum by successively deleting everything after the third-last 1; each deleted segment determines the value of the next term in the ordinal sum. As a consequence, the original position is a Right win if the position remaining at the end is of the form 0…01, and the value is given by the ordinal sum.
The necessary background for numbers is in Section 2. Section 3 contains results
about outcomes and also includes our main results. First, we show that the values are
numbers in Theorem 3.2. Next, an algorithm to find the value of a position is presented,
and Theorem 3.3 states that the value given by the algorithm is correct.
The actual analysis is in Section 4. It starts by identifying the best moves for both
players in Theorem 4.2. This leads directly to the core result Lemma 4.5, which shows
that the value of a position is an ordinal sum. The ordinal sum decomposition of G
is found as follows. Let G^L be the position after the Left move that removes the rightmost 1s. Let H be the string G \ G^L, that is, the substring eliminated by Left's move. Let H^R be the result of Right's best move in H. Now we have that G = G^L : H^R. In contrast, the ordinal sums for hackenbush strings and domino shave [5] involve the value of H, not H^R.
The proof of Theorem 3.3 is given in Section 4.1. The final section includes a brief
discussion of open problems.
Finally, we pose a question for the reader, which we answer at the end of Sec-
tion 4.1: Who wins 0101011111 + 1101100111 + 0110110110111 and how?
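For small positions, outcomes can be checked by brute force. The sketch below (our own helper names, not the authors' method) searches the game tree directly and classifies a position as L (Left wins), R (Right wins), P (previous player wins), or N (next player wins):

```python
from functools import lru_cache

def norm(d):
    return d.rstrip('0')                      # coins after the last 1 are dead

def moves(d, player):
    if player == 'L':                         # Left turns two 1s into 0s
        ones = [i for i, c in enumerate(d) if c == '1']
        return [norm(d[:i] + '0' + d[i + 1:j] + '0' + d[j + 1:])
                for a, i in enumerate(ones) for j in ones[a + 1:]]
    return [norm(d[:k] + '1' + d[k + 1:l] + '0' + d[l + 1:])   # Right flips (0, 1)
            for k in range(len(d)) if d[k] == '0'
            for l in range(k + 1, len(d)) if d[l] == '1']

@lru_cache(maxsize=None)
def wins_first(d, player):
    """True if `player`, moving first, wins d (a player with no move loses)."""
    other = 'R' if player == 'L' else 'L'
    return any(not wins_first(m, other) for m in moves(d, player))

def outcome(d):
    d = norm(d)
    lf, rf = wins_first(d, 'L'), wins_first(d, 'R')
    if lf and not rf:
        return 'L'                            # Left wins no matter who starts
    if rf and not lf:
        return 'R'
    return 'N' if lf else 'P'                 # next / previous player wins
```

Since the values turn out to be numbers, the class N never occurs here; handling the authors' sum of three games would additionally require allowing a move in any summand.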
2 Numbers
All the values in this paper are numbers, and this section contains all the necessary
background to make the paper self-contained. For further details, consult [1, 8]. Posi-
tions are written in terms of their options; that is, G = {Gℒ | Gℛ }.
Definition 2.1 ([1, 2, 8]). Let G be a number whose options are numbers, and let G^L, G^R be the Left and Right options of the canonical form of G.
1. If there is an integer k with G^L < k < G^R, or if either G^L or G^R does not exist, then G is the integer n closest to zero that satisfies G^L < n < G^R.
2. If both G^L and G^R exist and the previous case does not apply, then G = p/2^q, where q is the least positive integer such that there is an odd integer p satisfying G^L < p/2^q < G^R.
The properties of numbers required for this paper are contained in the next three
theorems.
Theorem 2.2 ([1, 2, 8]). Let G be a number whose options are numbers, and let G^L and G^R be the Left and Right options of the canonical form of G. If G′ and G′′ are any Left and Right options, respectively, then
G′ ⩽ G^L < G < G^R ⩽ G′′.
Theorem 2.2 shows that if we know that this string of inequalities holds, then we need only consider the unique best move for both players in a number.
We include the following examples to further illustrate these ideas:
(a) 0 = { | } = {−9 | } = {−1/2 | 7/4};
(b) −2 = { | −1} = {−5/2 | −31/16};
(c) 1 = {0 | } = {0 | 100};
(d) 1/2 = {0 | 1} = {3/8 | 17/32}.
The ordinal sum of a game G (the base) and a game H (the exponent) is G : H = {G^ℒ, G : H^ℒ | G^ℛ, G : H^ℛ}.
Intuitively, playing in G eliminates H, but playing in H does not affect G. For ease of
reading, if an ordinal sum is a term in an expression, then we enclose it in brackets.
Note that x : 0 = x = 0 : x since neither player has a move in 0. We demonstrate
how to calculate the values of other positions with the following examples:
(a) 1 : 1 = {1 | } = 2;
(b) 1 : −1 = {0 | 1} = 1/2;
(c) 1 : 1/2 = {0, (1 : 0) | (1 : 1)} = {0, 1 | {1 | }} = {1 | 2} = 3/2;
(d) 1/2 : 1 = {0, (1/2 : 0) | 1} = {0, 1/2 | 1} = {1/2 | 1} = 3/4;
(e) (1 : −1) : 1/2 = (1/2 : 1/2) = {0, (1/2 : 0) | 1, (1/2 : 1)} = {0, 1/2 | 1, 3/4} = {1/2 | 3/4} = 5/8.
Note that in all cases, when base and exponent are numbers, the players prefer to play
in the exponent. In the remainder of this paper, all the exponents will be positive.
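The displayed recursion for G : H can be run directly on numbers: take the canonical options of each number, recurse, and collapse with the simplest-number rule of Definition 2.1. A sketch (our own names; it assumes, as in the examples above, that every ordinal sum encountered is itself a number):

```python
from fractions import Fraction
from math import floor, ceil

def options(x):
    """Canonical Left and Right options of a number x."""
    x = Fraction(x)
    if x == 0:
        return [], []
    if x.denominator == 1:                   # nonzero integer
        return ([x - 1], []) if x > 0 else ([], [x + 1])
    step = Fraction(1, x.denominator)        # x = odd/2^q in lowest terms
    return [x - step], [x + step]

def simplest_between(lo, hi):
    """Simplest number strictly between lo and hi (None = no bound)."""
    den = 1
    while True:
        low = None if lo is None else floor(lo * den) + 1
        high = None if hi is None else ceil(hi * den) - 1
        if low is None and high is None:
            return Fraction(0)
        if low is None:
            return Fraction(min(0, high), den)
        if high is None:
            return Fraction(max(0, low), den)
        if low <= high:
            return Fraction(min(max(0, low), high), den)
        den *= 2

def ordinal_sum(g, h):
    """G : H = {G^L, G : H^L | G^R, G : H^R} for numbers g and h."""
    gl, gr = options(g)
    hl, hr = options(h)
    lefts = gl + [ordinal_sum(g, x) for x in hl]
    rights = gr + [ordinal_sum(g, x) for x in hr]
    return simplest_between(max(lefts) if lefts else None,
                            min(rights) if rights else None)
```

This reproduces examples (a)–(e); for instance `ordinal_sum(Fraction(1, 2), Fraction(1, 2))` returns 5/8.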
One of the most important results about ordinal sums was first reported in Winning Ways.
Theorem 2.3 (Colon Principle [3]). Let G, H, and K be games. If H ⩾ K, then (G : H) ⩾ (G : K), and if H > K, then (G : H) > (G : K).
The Colon Principle helps prove inequalities that will be useful in this paper.
Theorem 2.4. Let G and H be numbers all of whose options are also numbers, and let
H ⩾ 0.
1. If H = 0, then G : H = G. If H > 0, then (G : H) > G.
2. G^L < (G : H^L) < (G : H) < (G : H^R) < G^R.
Proof. For item (1), the result follows immediately by Theorem 2.3.
For item (2), if H ⩾ 0 and all the options of G and H are numbers, then G^L < G = (G : 0) ⩽ (G : H^L) < (G : H) < (G : H^R). The second, third, and fourth inequalities hold since H is a number, and thus 0 ⩽ H^L < H < H^R, and by applying the Colon Principle.
To complete the proof, we need to show that (G : H^R) < G^R. To do so, we check that G^R − (G : H^R) > 0; in words, we check that Left can always win. Left moving first can move in the second summand to G^R − G^R = 0 and win. Right moving first has several options:
1. Moving to G^R − G^L > 0, since G and its options are numbers. Hence Left wins.
2. Moving to G^R − (G : H^{RL}) > 0 by induction.
3. Moving to G^{RR} − (G : H^R), but Left can respond to reach G^{RR} − G^R > 0 since G and its options are numbers.
To prove that all the positions are numbers, we use results from [4]. A set of positions from a ruleset is called hereditarily closed if it is closed under taking options. This game satisfies ruleset properties introduced in [4].
In particular, the properties are called the F1 property and the F2 property, which both
highlight the notion of First-move-disadvantage in numbers and are defined formally
as follows.
Definition 2.5 ([4]). Let S be a hereditarily closed ruleset. Given a position G ∈ S, the
pair (GL , GR ) ∈ Gℒ ×Gℛ satisfies the F1 property if there is GRL ∈ GRℒ such that GRL ⩾ GL
or there is GLR ∈ GLℛ such that GLR ⩽ GR .
Definition 2.6 ([4]). Let S be a hereditarily closed ruleset. Given a position G ∈ S, the
pair (GL , GR ) ∈ Gℒ × Gℛ satisfies the F2 property if there are GLR ∈ GLℛ and GRL ∈ GRℒ
such that GRL ⩾ GLR .
Theorem 2.7 ([4]). Let S be a hereditarily closed ruleset. All positions G ∈ S are numbers
if and only if for any position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ satisfy either the F1 or
the F2 property.
3 Main results
Before considering the values and associated strategies, we consider the outcomes,
that is, we partially answer the question “Who wins the game?” The full answer re-
quires an analysis analogous to finding the values.
Theorem 3.1. Let G = d_1 d_2 … d_n be a flipping coins position. If G contains a nonzero even number of 1s, or if G is a nontrivial position with d_1 = 1, then Left wins.
Proof. A Right move does not decrease the number of 1s in the position. Thus, if in G,
Left has a move, then she still has a move after any Right move in G. Consequently,
regardless of d1 , if there are an even number of 1s in G, then it will be Left who reduces
the game to all 0s. Similarly, if d1 = 1 and there are an odd number of 1s, then Left will
eventually reduce G to a position with a single 1, that is, to d1 = 1 and di = 0 for i > 1.
In this case, Right has no move and loses.
The remaining case, d1 = 0 and an odd number of 1s, is more involved. The analy-
sis of this case is the subject of the remainder of the paper. We first prove the following:
Theorem 3.2. Every flipping coins position is a number.
Proof. Let G be a flipping coins position. If only one player has a move, then the game
is an integer. Otherwise, let L be the Left move to change (di , dj ) from (1, 1) to (0, 0). Let
R be the Right move to change (dk , dℓ ) from (0, 1) to (1, 0). No other digits are changed.
If all four indices are distinct, then both L and R can be played in either order. In this
case, GLR = GRL . Thus the F2 property holds. If there are only three distinct indices,
then two of the bits are ones. If Left moves first, then di = dj = dk = 0. If Right moves
first, then there are still two ones remaining after his move. After Left moves, we have
di = dj = dk = 0, and hence GL = GRL . The F1 property holds.
There are no more cases since there must be at least three distinct indices. Since
every position satisfies either the F1 or the F2 property, by Theorem 2.7 it follows that
every position is a number.
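Since the positions are numbers (Theorem 3.2), their values can be computed directly from the game tree with the simplest-number rule of Definition 2.1 — feasible for short strings. A self-contained sketch (our own helper names):

```python
from fractions import Fraction
from functools import lru_cache
from math import floor, ceil

def norm(d):
    return d.rstrip('0')                       # coins after the last 1 are dead

def left_moves(d):
    ones = [i for i, c in enumerate(d) if c == '1']
    return [norm(d[:i] + '0' + d[i + 1:j] + '0' + d[j + 1:])
            for a, i in enumerate(ones) for j in ones[a + 1:]]

def right_moves(d):
    return [norm(d[:k] + '1' + d[k + 1:l] + '0' + d[l + 1:])
            for k in range(len(d)) if d[k] == '0'
            for l in range(k + 1, len(d)) if d[l] == '1']

def simplest_between(lo, hi):
    """Simplest number strictly between lo and hi (None = no bound)."""
    den = 1
    while True:
        low = None if lo is None else floor(lo * den) + 1
        high = None if hi is None else ceil(hi * den) - 1
        if low is None and high is None:
            return Fraction(0)
        if low is None:
            return Fraction(min(0, high), den)
        if high is None:
            return Fraction(max(0, low), den)
        if low <= high:
            return Fraction(min(max(0, low), high), den)
        den *= 2

@lru_cache(maxsize=None)
def value(d):
    """Value of a position; numbers collapse via the simplest-number rule."""
    lefts = [value(m) for m in left_moves(d)]
    rights = [value(m) for m in right_moves(d)]
    return simplest_between(max(lefts) if lefts else None,
                            min(rights) if rights else None)
```

For example, `value("011")` returns 1/4. This brute force is exponential in the length of the line, which is what the algorithm below avoids.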
The algorithm takes a flipping coins position G = G_0 and runs as follows.
1. Set i = 0.
2. Reductions: Let α and β be binary strings, either of which can be empty.
(a) If G_0 = α01^{3+j}β, j ⩾ 1, then set G_0 = α101^jβ.
(b) If G_0 = α01^3β with β containing an even number of 1s, then set G_0 = α10β.
(c) Repeat until neither case applies; then go to Step 3.
3. If G_i is 0^r1, r ⩾ 0, or 1^a0^{p_i}10^{q_i}1, a ⩾ 0 and p_i + q_i ⩾ 0, then go to Step 5. Otherwise, G_i = α01^a0^{p_i}10^{q_i}1 for some α, with a > 0 and p_i + q_i ⩾ 1. Set
Q_i = 0^{p_i}10^{q_i}1,
G_{i+1} = α01^a.
Go to Step 4.
4. Set i = i + 1. Go to Step 3.
5. If G_i = 0^r1, then set v_i = −r. If G_i = 1^a0^{p_i}10^{q_i}1, then set v_i = ⌊a/2⌋ + 1/2^{2p_i+q_i}. Go to Step 6.
6. For j from i − 1 down to 0, set v_j = v_{j+1} : 1/2^{2p_j+q_j−1}.
7. Return the number v_0.
First, we illustrate the algorithm with the following example. Consider the position G = 10011110110110111011110011. At each step, we mark which reduction is being applied: 2(a) is denoted by †, whereas 2(b) is denoted by ‡.
The algorithm gives that
10011110110110111011110011 = 10011110110110111011110011(†)
= 100111101101101111010011(†)
= 1001111011011101010011(‡)
= 10011110111001010011(‡)
= 100111110001010011(†)
= 1010110001010011.
Step 3 partitions the last expression into 101(011)(000101)(0011), so that the ordinal sum is given by
v_0 = ((1/2 : 1/2) : 1/64) : 1/8 = 10257/16384.
As a second example, the reductions give
01001110110111011101 = 010011101110011101
= 0100111100011101
= 01010100011101,
and the ordinal sum is
v_0 = ((−1 : 1/4) : 1/32) : 1 = −893/1024.
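The whole pipeline can be sketched in Python. Two conventions below are our own assumptions, not the authors': reductions are applied to the rightmost occurrence first (the paper fixes no order; the lemmas of Section 4 show that any valid application preserves the value), and values are carried as sign expansions, using the standard fact that an ordinal sum of numbers concatenates their sign expansions, as for hackenbush strings:

```python
import re
from fractions import Fraction

def reduce_pos(g):
    """Step 2: apply reductions 2(a) and 2(b), rightmost occurrence first."""
    while True:
        runs = list(re.finditer(r'01{4,}', g))           # 2(a): 0 1^(3+j) -> 1 0 1^j
        if runs:
            m = runs[-1]
            j = (m.end() - m.start() - 1) - 3
            g = g[:m.start()] + '10' + '1' * j + g[m.end():]
            continue
        done = True
        for m in reversed(list(re.finditer(r'0111(?=0|$)', g))):
            if g[m.end():].count('1') % 2 == 0:          # 2(b): 0111b -> 10b, b even
                g = g[:m.start()] + '10' + g[m.end():]
                done = False
                break
        if done:
            return g

def sexp_value(s):
    """Number whose sign expansion is the +/- string s."""
    v, step, changed = Fraction(0), Fraction(1), False
    for i, c in enumerate(s):
        if i > 0 and (changed or c != s[i - 1]):
            changed, step = True, step / 2
        v += step if c == '+' else -step
    return v

def flipping_coins_value(g):
    exps = []                                  # exponents e_j = 2 p_j + q_j - 1
    while True:
        g = reduce_pos(g).rstrip('0')
        ones = [i for i, c in enumerate(g) if c == '1']
        if len(ones) < 3 or set(g[:ones[-3] + 1]) == {'1'}:
            break                              # base case of Step 3
        t = ones[-3]                           # delete everything after the 3rd-last 1
        p, q = ones[-2] - t - 1, ones[-1] - ones[-2] - 1
        exps.append(2 * p + q - 1)
        g = g[:t + 1]
    if not ones:
        sexp = ''                              # empty position, value 0
    elif len(ones) == 1:
        sexp = '-' * ones[0]                   # 0^r 1 has value -r
    else:                                      # base 1^a 0^p 1 0^q 1
        a = ones[-3] + 1 if len(ones) >= 3 else 0
        p, q = ones[-2] - a, ones[-1] - ones[-2] - 1
        sexp = '+' * (a // 2 + 1) + '-' * (2 * p + q)    # floor(a/2) + 1/2^(2p+q)
    for e in reversed(exps):                   # v_j = v_{j+1} : 1/2^e, by appending
        sexp += '+' + '-' * e
    return sexp_value(sexp)
```

On the two worked examples this reproduces 10257/16384 and −893/1024, even though a different reduction order may pass through different intermediate strings.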
Theorem 3.3 (Value theorem). Let G be a flipping coins position. If v0 is the value
obtained by the algorithm applied to G, then G = v0 .
In the next section, we derive several results that will be used to prove Theo-
rem 3.3. The proof of Theorem 3.3 will appear in Section 4.1.
Corollary 4.1. Let α, β, and γ be arbitrary binary strings. We then have that α1β0γ > α0β1γ. Moreover, for an integer r ⩾ 0, we have that β10^r1 > β.
Proof. Recall that by Theorem 3.2 all flipping coins positions are numbers. Thus The-
orem 2.2 applies.
A Right option of α0β1γ is α1β0γ, and so we have that α1β0γ > α0β1γ. Similarly, a Left option of β10^r1 is β, and so we have that β10^r1 > β.
Next, we establish the best moves for each player. Right wants to play the 0 furthest to the right and the 1 adjacent to it. Left wants to play the two 1s furthest to the right.
Theorem 4.2. Let G be a flipping coins position, and let r and n, r ≠ n, be the two greatest indices such that d_r = d_n = 1. Let s be the greatest index such that d_s = 0. Left's best move is to play (d_r, d_n), and Right's best move is to play (d_s, d_{s+1}).
Proof. We prove this theorem by induction on the options. Note that we use the equiv-
alent binary representation of the game position. If there are three or fewer bits, then,
by exhaustive analysis, the theorem is true.
Let G be d1 d2 . . . dn . We begin by proving Left’s best moves. Let r and n be the two
largest indices, where dr = dn = 1, and thus dk = 0 for r < k < n. Let i and j, i < j, be two
indices with di = dj = 1. We use the notation G(di , dj , dr , dn ) to highlight the salient bits.
The claimed best Left move is from G(1, 1, 1, 1) to G(1, 1, 0, 0). This must be compared to
any other Left move, represented by moving from G(1, 1, 1, 1) to G(0, 0, 1, 1). That is, we
need to show that G(1, 1, 0, 0) − G(0, 0, 1, 1) ⩾ 0.
For the moves to be different, at least three of i, j, r, n are distinct. We first assume
that the four indices are distinct. In this case, we have that i < j < r < n. By applying
Corollary 4.1 twice we have that
or
and
If i = s, then
and
Thus D ⩾ 0.
Next, we consider Right moving in the second summand of D = G(1, 0, 0, 1)−G(0, 1, 1, 0).
Note that by the choices of the subscripts, dℓ = 1 if n ⩾ ℓ ⩾ s + 1.
1. If n > s + 2, then Right’s best move in the second summand is to change dn−1 , dn
from (1, 1) to (0, 0). Left copies this move in the first summand, and the resulting
difference game is nonnegative by induction.
2. Suppose n = s + 2.
i. If j < s + 1, then G(di , dj , ds , ds+1 , ds+2 ) = G(0, 1, 0, 1, 1) and D = G(1, 0, 0, 1, 1) −
G(0, 1, 1, 0, 1). Right’s best move is to G(1, 0, 0, 1, 1) − G(0, 1, 0, 0, 0). Left moves
to G(1, 0, 0, 0, 0)−G(0, 1, 0, 0, 0). This is positive by Corollary 4.1, and Left wins.
For the next two subcases, exactly two 1s will occupy two of the four indexed
positions. Since Right is moving in the second summand, he is changing two
1s to two 0s. Thus Left’s best response for each case is to move in the first
summand, bringing the game to G(0, 0, 0, 0) − G(0, 0, 0, 0) = 0, and she wins.
For these cases, we only list the original position. The strategy for both cases
is as just described.
ii. If j = s + 1, then G(di , ds , ds+1 , ds+2 ) = G(0, 0, 1, 1) and D = G(1, 0, 0, 1) −
G(0, 1, 0, 1).
iii. If j = s + 2, then G(di , ds , ds+1 , ds+2 ) = G(0, 0, 1, 1) and D = G(1, 0, 1, 0) −
G(0, 1, 0, 1).
3. Now suppose n = s + 1.
i. If j < s + 1, then let ℓ < s + 1 be the largest index such that dℓ = 1.
If j < ℓ, then we have G(di , dj , dℓ , ds , ds+1 ) = G(0, 1, 1, 0, 1) and D = G(1, 0, 1, 0, 1) −
G(0, 1, 1, 1, 0). Right’s best move is to G(1, 0, 1, 0, 1) − G(0, 1, 0, 0, 0). Left moves
to G(1, 0, 0, 0, 0)−G(0, 1, 0, 0, 0), which is positive since G(1, 0, 0, 0, 0) is a Right
option of G(0, 1, 0, 0, 0).
If j = ℓ, then G(di , dj , ds , ds+1 ) = G(0, 1, 0, 1) and D = G(1, 0, 0, 1) − G(0, 1, 1, 0).
Right’s best move is to G(1, 0, 0, 1) − G(0, 0, 0, 0). Left moves to G(0, 0, 0, 0) −
G(0, 0, 0, 0) = 0, and Left wins.
ii. If j = s + 1, then G(di , ds , ds+1 ) = G(0, 0, 1) and D = G(1, 0, 0) − G(0, 1, 0). This is
positive by Corollary 4.1.
Suppose in a position that the bits of the best Right move are different from those
of the best Left move. The next lemma essentially says that the positions before and af-
ter one move by each player are equal. It is phrased in a way that is useful for reducing
the length of the position. Recall that a nontrivial position looks like G = α01^a 0^p 1 0^q 1β,
where a, p, and q are nonnegative integers, and α and β are arbitrary binary strings.
For the algorithm, it suffices to prove the result for β being empty. However, it is useful,
certainly for a human, to reduce the length of the position as much as possible.
Lemma 4.3. Let α be an arbitrary binary string. If a ⩾ 0, then we have that α0111 1^a =
α10 1^a.
The game of flipping coins | 11
Proof. Let H = α0111 1^a − α10 1^a. We need to show that H = 0. To simplify the proof,
in some cases the second player will play suboptimal moves. We have several cases to
consider.
1. If a ⩾ 2, then playing the same move in the other summand is a good response.
After two such moves, we have either
or
In all cases the second player wins H, thereby proving the result.
There are reductions that can be applied to the middle of the position, but extra
conditions are needed.
Lemma 4.4. Let α and β be arbitrary binary strings where either (a) β starts with a 1, or
(b) β starts with 0 and has an even number of 1s. We then have that
α0111β = α10β.
Proof. Let H = α0111β − α10β. We need to show that H = 0. We have several cases to
consider.
1. If β is empty or β = 1^a, then H = 0 by Lemma 4.3. Therefore we may assume that β
has at least one 1 and one 0.
2. If β = 1γ1 (β must end in a 1), then in both summands the best moves are pairs of
bits in β and −β. If each player copies the opponent’s move in the other summand,
then this leads to
Lemma 4.5. Let α be an arbitrary binary string. If a ⩾ 1 and p and q are nonnegative
integers such that p + q ⩾ 1, then
α01^a 0^p 1 0^q 1 = α01^a : 1/2^{2p+q−1}.
Proof. We prove that
α01^a 0^p 1 0^q 1 − (α01^a : 1/2^{2p+q−1}) = 0.
Note that in Theorem 2.4 we have that playing in the base of α01^a : 1/2^{2p+q−1} is worse than
playing in the exponent. We have two cases to consider.
1. Left plays first in the first summand, and Right responds in the second summand,
or Right plays first in the second summand, and Left responds in the first sum-
mand. In either case, Right has a move in the exponent (moves to 0) since 2p + q −
1 ⩾ 0. In either order the final position is given by
2. Right plays first in the first summand, and Left responds in the second summand,
or Left plays first in the second summand, and Right responds in the first sum-
mand. In either case, we consider
α01^a 0^p 1 0^q 1 − (α01^a : 1/2^{2p+q−1}).
i. Assume that 2p + q − 1 ⩾ 1. After the two moves, the resulting position is
α01^a 0^r 1 0^s 1 − (α01^a : 1/2^{2p+q−2}),
where 2r + s = 2p + q − 1. By induction,
α01^a 0^r 1 0^s 1 = α01^a : 1/2^{2r+s−1} = α01^a : 1/2^{2p+q−2}.
Thus α01^a 0^r 1 0^s 1 − (α01^a : 1/2^{2p+q−2}) = 0.
ii. Assume that 2p + q − 1 = 0, that is, q = 1 and p = 0. The original position is
α01^a 101 − (α01^a : 1).
After the two moves, we have the position α01^a 11 − α101^{a−1} (note that Left has
no move in the exponent). By Lemma 4.3, α01^a 11 = α101^{a−1}. Hence we have
that α01^a 11 − α101^{a−1} = 0, and the result follows.
The values of the positions not covered by Lemma 4.5 are given next.
Lemma 4.6. Let a, p, and q be nonnegative integers. Then 0^p 1 = −p, and 1^a 0^p 1 0^q 1 =
⌊a/2⌋ + 1/2^{p+q}.
Proof. Let G = 0^p 1. Left has no moves, and Right has p. Note that in 1^a, Left has ⌊a/2⌋
moves, and Right has none.
Now let G = 1^a 0^p 1 0^q 1. We proceed by induction on p + q. In all cases, Left's move
is to 1^a, that is, to ⌊a/2⌋. If p = 0 and q = 0, then G = 1^a 11, which has the value
⌊a/2⌋ + 1/2^0 = ⌊a/2⌋ + 1. Assume that p + q = k, k > 0. If q > 0, then G = {⌊a/2⌋ | 1^a 0^p 1 0^{q−1} 1}.
By induction we have that
G = {⌊a/2⌋ | ⌊a/2⌋ + 1/2^{p+q−1}} = ⌊a/2⌋ + 1/2^{p+q}.
If q = 0 (and hence p = k > 0), then by induction
G = {⌊a/2⌋ | ⌊a/2⌋ + 1/2^{(p−1)+1}} = ⌊a/2⌋ + 1/2^p,
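The induction step above leans on the simplicity rule: {x | y}, for numbers x < y, is the simplest number strictly between x and y. The following Python sketch (the helper names `simplest_dyadic` and `flipping_value` are ours, not the paper's) finds the smallest-denominator dyadic in such an interval and reproduces the closed form of Lemma 4.6, assuming the interval lies within a unit interval so no integer simplification intervenes:

```python
from fractions import Fraction
from math import floor

def simplest_dyadic(lo, hi):
    # Smallest-denominator dyadic fraction strictly between lo and hi.
    # Assumes hi <= lo + 1, so the minimal-denominator candidate is unique
    # and integer-simplicity tie-breaking never arises.
    k = 0
    while True:
        cand = Fraction(floor(lo * 2**k) + 1, 2**k)
        if cand < hi:
            return cand
        k += 1

def flipping_value(a, p, q):
    # Closed form from Lemma 4.6: 1^a 0^p 1 0^q 1 = floor(a/2) + 1/2^(p+q).
    return a // 2 + Fraction(1, 2 ** (p + q))

# The induction step {floor(a/2) | floor(a/2) + 1/2^(p+q-1)} evaluates,
# by the simplicity rule, to floor(a/2) + 1/2^(p+q).
a, p, q = 5, 1, 2
left_stop = Fraction(a // 2)
right_stop = flipping_value(a, p, q - 1)
assert simplest_dyadic(left_stop, right_stop) == flipping_value(a, p, q)
```

Running the check for other small a, p, q with p + q ≥ 1 confirms the same pattern.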
Proof of Theorem 3.3. Let G be a flipping coins position. Step 2 reduces the binary
string. The reductions in Step 2(a) are those of Lemma 4.3 and Lemma 4.4(a). The re-
ductions in Step 2(b) are those of Lemma 4.4(b). In all cases, these lemmas show that
each new reduced position is equal to G.
In Step 3, we claim G_i ≠ β1^3 for any β. This is true for i = 0 by Lemma 4.3. If i > 0,
then at each iteration of Step 3, the last two 1s are removed from G_{i−1}. Now the original
reduced position would be G_0 = β1^3 γ, where γ has an even number of 1s. Lemma 4.4(b)
would apply, eliminating the three consecutive 1s. Now either G_i is one of 0^r 1, r ⩾ 0, or
1^a 0^{p_i} 1 0^{q_i} 1, a ⩾ 0, p_i + q_i ⩾ 0, or G_i = α01^a 0^{p_i} 1 0^{q_i} 1, p_i + q_i ⩾ 1, a > 0. In the latter case
the index is incremented, and the algorithm goes back to Step 3.
Step 5 applies when Step 3 no longer applies, i. e., G_i is one of 0^r 1, r ⩾ 0, or
1^a 0^{p_i} 1 0^{q_i} 1, a ⩾ 0, p_i + q_i ⩾ 0. Now v_i is the value of G_i as given in Lemma 4.6.
Lemma 4.5 shows that for each j < i, G_j = G_{j+1} : 1/2^{2p_j+q_j−1}, the evaluation in Step 6.
Thus the value of G is v0 , and the theorem follows.
The question “Who wins 0101011111 + 1101100111 + 0110110110111 and how?” from
Section 1 can now be answered.
First, we have that
0101011111 = 01011011 = (01011 : 1/2) = ((01 : 1/2) : 1/2)
= ((−1 : 1/2) : 1/2) = −11/16,
1101100111 = 1101101 = (1101 : 1) = (1/2 : 1) = 3/4,
0110110110111 = 0110110111 = 0110111 = 0111 = 0.
01010111 + 1101100111 + 0110110110111 = −3/4 + 3/4 + 0 = 0.
Her best move in the second summand gives a sum of −11/16 + 5/8 + 0 = −1/16, and in the
third, it gives −11/16 + 3/4 − 1/8 = −1/16. Left loses both times.
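Since all the values involved are dyadic rationals, the sums in this example can be checked mechanically with Python's exact fractions (this verifies only the arithmetic; the summand values themselves come from the reduction lemmas above):

```python
from fractions import Fraction

# Values of the three summands as computed above.
first, second, third = Fraction(-11, 16), Fraction(3, 4), Fraction(0)

# Left's reply in the second summand (moving it to a position of value 5/8)
# leaves a negative total, a loss for her:
assert first + Fraction(5, 8) + third == Fraction(-1, 16)

# Her reply in the third summand (changing its value to -1/8) also loses:
assert first + second + Fraction(-1, 8) == Fraction(-1, 16)
```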
5 Future directions
Natural variants of flipping coins involve increasing the number of coins that can
be flipped from two to three or more. A brief computer search suggests that the only
version where the values are numbers is the game in which Left flips a subsequence
of all 1s and Right flips a subsequence of 0s ended by a 1. We conjecture that a simi-
lar ordinal sum structure will arise in these variants. Other variants have values that
include switches, tinies, minies, and other three-stop games. However, some variants,
when the reduced canonical values are considered, only seem to consist of numbers
and switches. A more thorough investigation should shed light on their structures.
Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, second edition, A K Peters/CRC Press, Boca Raton, FL, 2019.
[2] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays, Volume
1, second edition, A K Peters, Ltd., Natick, MA, 2001.
[3] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 3, second edition, A K Peters, Ltd., Natick, MA, 2003.
[4] A. Carvalho, M. A. Huggan, R. J. Nowakowski, and C. P. dos Santos, A note on numbers, Integers,
to appear.
[5] A. Carvalho, M. A. Huggan, R. J. Nowakowski, and C. P. dos Santos, Ordinal Sums, clockwise
hackenbush, and domino shave, Integers, to appear.
[6] J. H. Conway, The Golay Codes and The Mathieu Groups, in Sphere Packings, Lattices and
Groups. Grundlehren der mathematischen Wissenschaften (A Series of Comprehensive Studies
in Mathematics), vol 290. Springer, New York, NY, 1983.
[7] R. T. Curtis, Eight octads suffice, J. Combin. Theory Ser. A 36 (1984), 116–123.
[8] A. N. Siegel, Combinatorial Game Theory, American Math. Soc., Providence, RI, 2013.
Kyle Burke, Matthew Ferland, Michael Fisher, Valentin Gledel, and
Craig Tennenhouse
The game of blocking pebbles
Abstract: Graph pebbling is a well-studied single-player game on graphs. We intro-
duce the game of blocking pebbles, which adapts graph pebbling into a two-player
strategy game, and we examine it within the context of combinatorial game theory. Positions
with game values matching all integers, all nimbers, and many infinitesimals and
switches are found. This game joins the ranks of other combinatorial games on graphs,
games with discovered moves, and partisan games with impartial movement options.
The computational complexity of the general case is shown to be PSPACE-hard.
1 Introduction
Graph pebbling is an area of current interest in graph theory. In an undirected graph
G, a root vertex r is designated. Heaps of pebbles are placed on the vertices of G, with
a legal move consisting of choosing a vertex v with at least two pebbles, removing two
pebbles, and placing a single pebble on a neighbor of v. The goal is to pebble or place a
pebble on the vertex r. The pebbling number of G, denoted π(G), is the smallest number
of pebbles such that, for any initial distribution of π(G) pebbles among the vertices of G
and any vertex of G chosen as the root, there is a sequence of moves resulting in the
root being pebbled.
Since graph pebbling was introduced by Chung [5] in 1989, a number of results on
pebbling of different families of graphs have been found. Of note are the pebbling
numbers of paths and cycles [13] and the continuing work on a conjecture of Graham's
on the Cartesian products of graphs [5].
For general graphs, the time complexity is also known, both for the determination of
π(G) and for the minimum number of moves in a successful pebbling solution. See [9] for a
survey of results in graph pebbling.
The results and language here are in reference to combinatorial game theory
(CGT). The nim sum, also called the digital sum, of nonnegative integers is the result of
Kyle Burke, Dept. of Computer Science, Plymouth State University, Plymouth, New Hampshire, USA,
e-mail: kwburke@plymouth.edu
Matthew Ferland, Dept. of Computer Science, University of Southern California, Los Angeles,
California, USA, e-mail: mferland@usc.edu
Michael Fisher, Dept. of Mathematics, West Chester University, West Chester, Pennsylvania, USA,
e-mail: mfisher@wcupa.edu
Valentin Gledel, Dept. of Computer Science, University Grenoble Alpes, Grenoble, France, e-mail:
valentin.gledel@univ-grenoble-alpes.fr
Craig Tennenhouse, Dept. of Mathematical Sciences, University of New England, Biddeford, Maine,
USA, e-mail: ctennenhouse@une.edu
https://doi.org/10.1515/9783110755411-002
Figure 2.1: A BRG-hackenbush position with blue, red, and green represented by thin, thick black,
and grey lines, respectively.
their sum in binary without carry. This is denoted x1 ⊕x2 if there are only two numbers,
and in the case of more, we use the notation ∑ ⊕xi . For more notation and background
on the computation of CGT game values, we refer the reader to [3, 1].
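Since the nim sum reappears throughout the later results, a one-line sketch may be useful: binary addition without carry is exactly the bitwise exclusive or.

```python
from functools import reduce

def nim_sum(*heaps):
    # Addition in binary without carry = bitwise exclusive or.
    return reduce(lambda x, y: x ^ y, heaps, 0)

assert nim_sum(3, 5) == 6      # 011 and 101 combine, without carry, to 110
assert nim_sum(1, 2, 3) == 0   # a zero nim sum: a second-player win in nim
```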
In Section 2, we introduce a two-player combinatorial ruleset based on graph peb-
bling, with subsequent sections addressing results on both impartial and partisan po-
sitions. This game involves strategic play that results in blocking the moves of one’s
opponent. Amazons is another well-known game, which also involves a notion of
blocking. However, in Amazons the blocking is always permanent (burnt square) or
temporary (queen occupies square). Due to the standard pebbling toll in Blocking
Pebbles, each pebble only has mobility for a finite time.
There are several pebbling games that appear in the literature [10, 14, 11]. The one
which is most similar to the game introduced here was originated by Lagarias and Saks
in 1989 to solve a problem of Erdős. These games do not include the nontoll moves
across an edge in the “wrong direction.” This type of move is unique to Blocking
Pebbles (as far as we are aware). There are also other pebbling games older than that
introduced by Lagarias and Saks [10]. These games bear no resemblance to Blocking
Pebbles and are used to study graph algorithm complexity.
Figure 2.2: At the top, a blocking pebbles position on the vertices A, B, C, and D; the pebble tuples (2, 0, 0), (0, 0, 0), and (2, 0, 0) are shown.
In BRG-hackenbush, dyadic rationals and nimbers are achievable game values. In addition, when allowing for in-
finite positions, all real numbers and ordinals are achievable values, but switches are
not. By contrast, in blocking pebbles players may move any number of pebbles at a
single vertex within certain constraints on the graph and pebble distribution. In this
way, blocking pebbles is similar to graph nim [8, 4].
Ruleset 1. Given a tuple of the form (b, r, g) at each vertex of a directed acyclic graph
G, Left can make one of the following two moves from the vertex v.
1. Move a positive number of blue and/or green pebbles from v to an in-neighbor
of v.
2. Remove two blue and/or green pebbles from v and place one on an out-neighbor
of v and discard the other.
No blue pebbles can be moved to a vertex with a nonzero number of red pebbles. Right
has the obvious symmetric moves.
Play proceeds following the normal play convention, where the last player to make
a legal move wins.
Note that if Left removes one blue and one green pebble from v, then she may
add the green to v’s out-neighbor. However, it is always preferable to instead add the
blue as this results in a position with more blue pebbles and increases the number of
vertices blocked by Left.
As an example, consider the position in Figure 2.2. At the top is a position in block-
ing pebbles. Note that Left cannot move any blue pebbles from vertex A to B since B
already contains a red pebble. However, Left can move a single blue pebble from A to
C at a cost of one blue pebble. She can also move the one green pebble from D to C.
Theorem 3.1. For every k ∈ ℤ, there is a position in blocking pebbles with value k.
Proof. Let G be a single arc directed from u to v. If k > 0, then place 2k blue pebbles
and a single red pebble on u, and no pebbles on v. Switch red and blue pebbles if
k < 0. This allows for k-many moves for Left by moving blue pebbles from u to v, but
the presence of a red pebble on u prevents moving any blue pebbles in the reverse
direction. Zero is trivially achieved by a graph with no pebbles or by any number of
other pebble distributions.
Regarding infinitesimals, ↓ is realized by an out-star with two leaves; that is, a ver-
tex u with out-neighbors v1 , v2 . Vertex v1 has a blue pebble, and v2 has one red and one
green pebble. Left can move the blue or green pebble to u, which is simple to identify
as ∗. Right, however, can move the green to the source vertex u resulting in ∗, the red
to u resulting in zero, or both red and green pebbles to u, which is also a zero position.
The initial position is {∗ | 0, ∗} = ↓.
Due to the blocking rule, blocking pebbles is nearly unique among partisan
combinatorial games. In BRG-hackenbush the presence of a move for one player does
not inhibit moves for the other. In clobber, another two-player partisan combinato-
rial game (see [2]), the presence of a red piece actually encourages movement for Left,
and vice versa. This is a property common to all dicot games. However, in blocking
pebbles a single well-placed blue pebble, for example, can cut off many of Right’s
moves. The only other well-known ruleset with this property appears to be Amazons,
which does not allow for discovered moves. It is natural, then, that many positions
result in game values that are switches.
Parts (1) and (4) of the next result show that every integer switch is achievable
with a specified pebbling configuration on the out-star K1,2 .
In the following lemma, we use the following notation for a bLue/Red pebbling
configuration of the out-star K1,2 : [(a, b), [c, d], [e, f ]] is the configuration with a blue
pebbles and b red pebbles on the central vertex, c blue pebbles and d red pebbles on
one of the pendant vertices, and e blue pebbles and f red pebbles on the other pendant
vertex.
Lemma 3.2. The following results pertain to a given blocking pebbles configuration
on the out-star K1,2 .
1. For c ≥ 1, the position [(a, b), [c, 0], [0, 0]] has value −⌊b/2⌋ if a = 1 and value
{⌊a/2⌋ − 1 | ⌊(a − b)/2⌋ + 1} if a ≥ 2,
2. for a, b, c, d ≥ 1, the position [(a, b), [c, 0], [0, d]] has value ⌊(a − b)/2⌋,
3. for a, b, c, d, e ≥ 1, the position [(a, b), [c, 0], [d, e]] has value ⌊(a − 1)/2⌋,
4. for a, b, c, d ≥ 1, the position [(0, 0), [a, b], [c, d]] has value {a + c − 1 | −(b + d − 1)},
5. for a, b, c ≥ 1, the position [(0, 0), [a, b], [0, c]] has value {a − 1 | −(3(b + c) − 5)},
6. for a, b ≥ 1, the position [(0, 0), [a, b], [0, 0]] has value {3a − 5 | −(3b − 5)},
7. for b ≥ 1, [(1, 0), [0, b], [0, 0]] and [(2, 0), [0, b], [0, 0]] are both zero positions.
Proof. For Case (1), the position [(1, 1), [c, 0], [0, 0]] is the zero position. It is also read-
ily checked that the position [(1, 2), [c, 0], [0, 0]] has value 0.
If b > 2, then Left has no move from [(1, b), [c, 0], [0, 0]]. From [(1, b), [c, 0], [0, 0]]
Right may move to the position [(1, b − 2), [c, 0], [0, 1]], which has value ⌊(3 − b)/2⌋ = −⌊b/2⌋ + 1
by induction. Hence [(1, b), [c, 0], [0, 0]] has value −⌊b/2⌋ as required.
If a ≥ 2, then Left's best move from [(a, b), [c, 0], [0, 0]] is to [(a − 2, b), [c, 0], [1, 0]],
which has value ⌊(a − 2)/2⌋ (Right has no move from this position, and Left has ⌊(a − 2)/2⌋
moves). Right's only move is to [(a, b − 2), [c, 0], [0, 1]], which has value ⌊(a − b + 2)/2⌋, also
by induction. Hence [(a, b), [c, 0], [0, 0]] has value {⌊a/2⌋ − 1 | ⌊(a − b)/2⌋ + 1} when a ≥ 2.
For Case (2), it is clear that the position [(1, 1), [c, 0], [0, d]] is a zero position. If
a ≥ 2, then from [(a, 1), [c, 0], [0, d]] Left has a move to [(a − 2, 1), [c + 1, 0], [0, d]], and
Right has no move. Thus [(a, 1), [c, 0], [0, d]] has value ⌊(a − 1)/2⌋ by induction.
A similar argument establishes the claim that [(1, b), [c, 0], [0, d]] has value ⌊(1 − b)/2⌋.
Now if a, b ≥ 2, then from [(a, b), [c, 0], [0, d]] Left has the move to [(a − 2, b), [c +
1, 0], [0, d]], and Right has the move to [(a, b − 2), [c, 0], [0, d + 1]]. By induction we see
that [(a, b), [c, 0], [0, d]] has value
For Case (3), note that if a = 1, then there are no moves for either player; the
formula given correctly yields the game value 0. If a = 2, then from the position
[(2, b), [c, 0], [d, e]] Left has the move to [(0, b), [c + 1, 0], [d, e]]. From here Left has no
move, and Right has e moves. Thus the position [(0, b), [c + 1, 0], [d, e]] has value −e.
Hence [(2, b), [c, 0], [d, e]] has value 0, as required.
If a > 2, then from [(a, b), [c, 0], [d, e]] Left can move to [(a − 2, b), [c + 1, 0], [d, e]],
which has value ⌊(a − 3)/2⌋ = ⌊(a − 1)/2⌋ − 1 by induction. Right has no moves from [(a, b), [c, 0],
[d, e]]. Hence [(a, b), [c, 0], [d, e]] has value
{⌊(a − 1)/2⌋ − 1 | } = ⌊(a − 1)/2⌋,
as desired.
For Case (4), note that Left’s only move from [(0, 0), [a, b], [c, d]] is to [(1, 0), [a −
1, b], [c, d]]. This last position has value a + c − 1 by induction. Similarly, Right’s only
move from [(0, 0), [a, b], [c, d]] is to [(0, 1), [a, b−1], [c, d]]. This position has value −(b+
d − 1) by induction. It now follows that [(0, 0), [a, b], [c, d]] has value
{a + c − 1 | −(b + d − 1)}.
Cases (5) and (6) follow from the previous result, and Case (7) is trivial.
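Several of the values above, e.g., {a + c − 1 | −(b + d − 1)} in Case (4), are switches: the Left stop exceeds the Right stop. For a switch {x | y} with x > y, the mean value is (x + y)/2 and the temperature is (x − y)/2 — standard quantities not computed in the text. A small sketch (our hypothetical helper name, not the paper's):

```python
from fractions import Fraction

def switch_mean_temp(x, y):
    # Mean value and temperature of the switch {x | y}, assuming x > y.
    assert x > y
    return Fraction(x + y, 2), Fraction(x - y, 2)

# Case (4) with a, b, c, d = 2, 1, 3, 2: the position has value {4 | -2}.
mean, temp = switch_mean_temp(2 + 3 - 1, -(1 + 2 - 1))
assert (mean, temp) == (Fraction(1), Fraction(3))
```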
We now consider a transitive 3-cycle graph (Fig. 2.3) with vertices a, b, c and arcs
ab, ac, and bc. The pebbling configurations considered below are written so that the
first array entry corresponds to the source vertex a, the second corresponds to b, and
the third to the sink vertex c.
An interesting result concerning the transitive 3-cycle crops up from the somewhat
unnatural starting position where 1 blue pebble and k red pebbles occupy the same
starting vertex. Specifically, we prove the following:
Theorem 3.3. For k > 1, the pebbling configuration on the transitive 3-cycle given by
[[0, 0], [0, 0], [1, k]] has game value (3 − 3k)+_{k−4} .
Lemma 3.4. Consider the following pebbling configurations of the transitive 3-cycle T.
Then the position
Figure 2.4: Game tree of the position [[0, 0], [0, 0], [1, 1]] on the transitive 3-cycle.
1. [[1, 0], [0, j], [0, k]] has value −3k − 2j + 2 if at least one of j or k is ≥ 1;
2. [[0, j], [1, 0], [0, k]] has value −3k − 2j + 3 if j, k ≥ 1;
3. [[0, 0], [1, 0], [0, k]] has value −3k + 3 if k ≥ 2 and value −1/2 if k = 1;
4. [[0, j], [1, 0], [0, 0]] has value −2j + 3 if j ≥ 2 and value 0 if j = 1;
5. [[0, j], [0, k], [1, 0]] has value −3k − 2j + 4 if j, k ≥ 1;
6. [[0, j], [0, 0], [1, 0]] has value −2j + 4 if j ≥ 2 and value 1 if j = 1;
7. [[0, 0], [0, k], [1, 0]] has value {−2k + 2 | −3k + 5} if k ≥ 2 and value 1/2 if k = 1;
8. [[0, 0], [0, j], [1, k]] has value {−3k − 2j + 2 | −4k − 3j + 5} if j ≥ 2 and k ≥ 1 and value
{−3k | −4k + 3} if j = 1 and k ≥ 1;
9. [[0, j], [0, 0], [1, k]] has value {−3k − 2j + 3 | −4k − 2j + 5} if j, k ≥ 1; and
10. [[0, ℓ], [0, j], [1, k]] has value −4k − 3j − 2ℓ + 4 if j, k, ℓ ≥ 1.
Proof. All claims will be proven simultaneously using induction (on the height of the
game tree). Base cases are easily checked and left to the interested reader.
Case (1): From [[1, 0], [0, j], [0, k]] Left has no move; Right’s best move is to [[1, 0],
[0, j + 1], [0, k − 1]]. By induction this position has value −3k − 2j + 3 by (1). Hence
[[1, 0], [0, j], [0, k]] has value
{ | −3k − 2j + 3} = −3k − 2j + 2,
as desired.
Case (2): Left again has no move from the starting position. Right’s best move is to
[[0, j + 1], [1, 0], [0, k − 1]]. If k = 1, then this position has value −2j + 1 by (4); if k ≥ 2,
then this position has value −3k − 2j + 4 by (2). In either case, [[0, j], [1, 0], [0, k]] has
value −3k − 2j + 3.
Case (3): First, suppose that k ≥ 2. Left can move to [[1, 0], [0, 0], [0, k]] from
[[0, 0], [1, 0], [0, k]]. From (1), this position has value −3k + 2. Right’s best move is
to [[0, 1], [1, 0], [0, k − 1]] with value −3k + 4. Hence [[0, 0], [1, 0], [0, k]] has value
If k = 1, then Left’s only move becomes [[1, 0], [0, 0], [0, 1]], which has value
−1 by (1), and Right’s only move is to [[0, 1], [1, 0], [0, 0]] with value 0 by (4). Thus
[[0, 0], [1, 0], [0, 1]] has value −1/2.
Case (4): For j ≥ 3, Left has no move from [[0, j], [1, 0], [0, 0]], and Right can move
to [[0, j − 2], [1, 0], [0, 1]] with value −2j + 4, by (2), giving [[0, j], [1, 0], [0, 0]] the game
value of −2j + 3.
If j = 2, then Right's move is to [[0, 0], [1, 0], [0, 1]] with value −1/2. Thus [[0, 2], [1, 0],
[0, 0]] has a value of −1 (= −2 ⋅ 2 + 3).
Finally, if j = 1, then neither Left nor Right has a move from [[0, 1], [1, 0], [0, 0]],
and so its value is 0.
Case (5): Let k ≥ 2. Left has no move, and Right can move to [[0, j + 1], [0, k −
1], [1, 0]] (Right’s best move). By induction this position has value −3k − 2j + 5 giving
[[0, j], [0, k], [1, 0]] the value −3k − 2j + 4.
If k = 1, then, again, Left has no move. However, Right can move to [[0, j +
1], [0, 0], [1, 0]]. By (6) this position has value −2j + 2, thus giving [[0, j], [0, 1], [1, 0]] the
value −2j + 1.
Case (6): First, we consider the case j > 2. Left’s only move is to [[0, j], [1, 0], [0, 0]].
By (4) this position has value −2j + 3. Right’s only move is to [[0, j − 2], [0, 1], [1, 0]]. By
(5) this position has value −2j + 5. Hence, if j > 2, then [[0, j], [0, 0], [1, 0]] has value
−2j + 4.
If j = 2, then Left's move to [[0, 2], [1, 0], [0, 0]] has value −1 by (4). Right's move to
[[0, 0], [0, 1], [1, 0]] has value 1/2 by (7). Thus [[0, 2], [0, 0], [1, 0]] has value 0.
Finally, if j = 1, then Left’s move [[0, 1], [1, 0], [0, 0]] has value 0, and Right has no
moves. Thus [[0, 1], [0, 0], [1, 0]] has value 1.
Case (7): If k ≥ 2, Left’s move to [[1, 0], [0, k], [0, 0]] has value −2k + 2 by (1). In
this case, Right’s move to [[0, 1], [0, k − 1], [1, 0]] has value −3k + 5 by (5). Therefore
[[0, 0], [0, k], [1, 0]] has value
If k = 1, then [[1, 0], [0, 1], [0, 0]] has value 0, and [[0, 1], [0, 0], [1, 0]] has value 1.
Hence [[0, 0], [0, 1], [1, 0]] has value 1/2.
Case (8): First suppose j ≥ 2 and k ≥ 2. Then Left’s move to [[1, 0], [0, j], [0, k]] has
value −3k − 2j + 2 by (1). Right has two sensible moves: one to [[0, 1], [0, j], [1, k − 1]] and
one to [[0, 1], [0, j − 1], [1, k]]. The former has value −4k − 3j + 6 by (10), and the latter
has value −4k − 3j + 5, also by (10). Thus [[0, 0], [0, j], [1, k]] has value
Next, we look at the case where j ≥ 2 and k = 1. Left’s move to [[1, 0], [0, j], [0, 1]]
has value −2j − 1 by (1). Right’s move to [[0, 1], [0, j], [1, 0]] has value −3j + 2 by (5).
Right’s move to [[0, 1], [0, j − 1], [1, 1]] has value −3j + 1 by (10). Hence [[0, 0], [0, j], [1, 1]]
has value
We now consider the case j = 1 and k ≥ 2. Left’s only move is to [[1, 0], [0, 1], [0, k]].
This position has value −3k by (1). Right’s move to [[0, 1], [0, 0], [1, k]] has value {−3k +
1 | −4k + 3} by (9), and his move to [[0, 1], [0, 1], [1, k − 1]] has value −4k + 3 by (10).
Therefore the position [[0, 0], [0, 1], [1, k]] has value
It can be shown that the option {−3k + 1 | −4k + 3} is reversible. Hence the canonical
form of the position [[0, 0], [0, 1], [1, k]] has value
Finally, we consider the case j = 1 and k = 1. Left’s move from [[0, 0], [0, 1], [1, 1]]
to [[1, 0], [0, 1], [0, 1]] has value −3 by (2). Right's move to [[0, 1], [0, 0], [1, 1]] has value
{−2 | −1} = −3/2. The move to [[0, 1], [0, 1], [1, 0]] has value −1 by (5). Thus the position
[[0, 0], [0, 1], [1, 1]] has value {−3 | −3/2} = −2.
Case (9): First, suppose that k ≥ 2. Then Left’s move from [[0, j], [0, 0], [1, k]] to
[[0, j], [1, 0], [0, k]] has value −3k −2j+3 by (2). Right’s best move is to [[0, j], [0, 1], [1, k −
1]] with value −4k − 2j + 5, thus giving the position [[0, j], [0, 0], [1, k]] the game value
of
If k = 1, then Left’s only move has value −2j, again by (2). Right’s move to
[[0, j], [0, 1], [1, 0]] has value −2j + 1 by (5). Hence [[0, j], [0, 0], [1, 1]] has value
Case (10): Let j = 1 and k ≥ 2. Left has no move from this starting position, and
Right has three sensible moves. Right can move to [[0, ℓ + 1], [0, 0], [1, k]] with value
{−3k − 2ℓ + 1 | −4k − 2ℓ + 3} by (9), or to [[0, ℓ + 1], [0, 1], [1, k − 1]] with value −4k − 2ℓ + 3
by (10), or to [[0, ℓ], [0, 2], [1, k]] with value −4k −2ℓ+2 by (10). The last move is optimal
for Right, and hence the position [[0, ℓ], [0, 1], [1, k]] has game value −4k − 2ℓ + 1, as
required.
If j = 1 and k = 1, then Left has no move from [[0, ℓ], [0, 1], [1, 1]], and Right has
again three sensible moves. Right’s move to [[0, ℓ+1], [0, 0], [1, 1]] has value (−4ℓ−3)/2
by (9), Right’s move to [[0, ℓ], [0, 2], [1, 0]] has value −2ℓ − 2 by (5), and Right’s move to
[[0, ℓ + 1], [0, 1], [1, 0]] has value −2ℓ − 1 by (5). Therefore the position [[0, ℓ], [0, 1], [1, 1]]
has value −2ℓ − 3.
If j ≥ 2 and k = 1, then Left has no move from [[0, ℓ], [0, j], [1, 1]], and Right has
three moves, each not costing a pebble to make: Right can move to [[0, ℓ+1], [0, j], [1, 0]]
with value −3j−2ℓ+2 by (5), Right can move to [[0, ℓ], [0, j+1], [1, 0]] with value −3j−2ℓ+1
by (5), and Right can move to [[0, ℓ + 1], [0, j − 1], [1, 0]] with value −3j − 2ℓ + 1 by (10).
Hence the position [[0, ℓ], [0, j], [1, 1]] has value −3j − 2ℓ.
Finally, if j ≥ 2 and k ≥ 2, then, as in every other subcase, Left has no move.
Right has his usual three moves: Right can move to [[0, ℓ + 1], [0, j], [1, k − 1]] with value
−4k − 3j − 2ℓ + 6 by (10), to [[0, ℓ], [0, j + 1], [1, k − 1]] with value −4k − 3j − 2ℓ + 5, or to
[[0, ℓ + 1], [0, j − 1], [1, k]] with value −4k − 3j − 2ℓ + 5. Thus [[0, ℓ], [0, j], [1, k]] has game
value −4k − 3j − 2ℓ + 4.
Proof. Left has two moves from the starting position [[0, 0], [0, 0], [1, k]]: Left can move
to [[1, 0], [0, 0], [0, k]] with value −3k + 2 by Lemma 3.4(1) or to [[0, 0], [1, 0], [0, k]] with
value −3k + 3 by Lemma 3.4(3). The latter move is clearly the optimal move for her.
There are two types of moves that Right can make: Right can move to [[0, ℓ], [0, 0],
[1, k − ℓ]], where 1 ≤ ℓ ≤ k, or to [[0, 0], [0, j], [1, k − j]], where 1 ≤ j ≤ k.
First suppose that Right moves to [[0, ℓ], [0, 0], [1, k − ℓ]], where 1 ≤ ℓ < k. This
position has value
by Lemma 3.4(9).
Next, suppose that Right moves to [[0, k], [0, 0], [1, 0]]. This position has value
−2k + 4
by Lemma 3.4(6).
We now consider the other type of move for Right. Suppose that Right moves to
[[0, 0], [0, j], [1, k − j]], where 1 < j < k. This position has value
by Lemma 3.4(8).
Next, suppose that Right moves to [[0, 0], [0, 1], [1, k − 1]]. This position has value
by Lemma 3.4(8).
Finally, suppose that Right moves to [[0, 0], [0, k], [1, 0]]. This position has value
by Lemma 3.4(7).
We will now show that the move to [[0, 0], [0, 1], [1, k − 1]] is Right’s optimal move.
First note that since {0 | −k + 4} ≤ 1, it follows that
(−3 ⋅ 2 + 3) + {0 | −2 + 4} = −2 < −3/2 = (−2 ⋅ 2 + 2) + {0 | −2 + 3}.
Hence
for k ≥ 2.
To show that
it suffices to show that {ℓ | −k +2ℓ+2}+{k −4 | 0} > 0. To this end, note that Left’s move
to ℓ + {k − 4 | 0} is a winning first move. Right’s move to −k + 2ℓ + 2 + {k − 4 | 0} leads to
(−k + 2ℓ + 2) + (k − 4) = 2ℓ − 2 ≥ 0 after Left’s response. Right’s move to {ℓ | −k + 2ℓ + 2} + 0
is not better, leading to ℓ + 0 = ℓ ≥ 1.
Our last task is showing that
It now follows that the value of the position [[0, 0], [0, 0], [1, k]] is
In the table below, we present, without proof, other interesting game values
achievable as bLue/Red Blocking Pebbles positions.
We end this section with a short discussion of the differences between blocking peb-
bles and BRG-hackenbush.
As noted above, the blocking mechanic of blocking pebbles results in a prepon-
derance of switches, whereas BRG-hackenbush has no such positions. Also, while we
would be surprised to find a dyadic that is not the game value for some blocking peb-
bles position, we have found many dyadic rationals difficult to construct, even with
the use of computational methods. On the other hand, BRG-hackenbush positions
with rational noninteger game values are easily constructed.
Theorem 3.5. The value of an in-star with green pebble distribution ⟩g0 , g1 , . . . , gn ⟨ is ∗g0 .
Proof. We will demonstrate this using induction on g0 . First, note that if g0 = 0, then
any move of a green pebble to the center from a leaf, resulting in the loss of a pebble,
can be countered by returning it to the same leaf. Next, we note that any move from
⟩g0 , g1 , . . . , gn ⟨ results in a change to g0 and that there is a move from this position that
results in any number of pebbles on the center node strictly less than g0 . Hence the
in-star is equivalent to a nim heap of size g0 .
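This equivalence can be checked by brute force on small in-stars. The sketch below (function names are ours; the move generation follows Ruleset 1 restricted to green pebbles, under our reading of the in-star orientation) computes Grundy values by memoized search and agrees with the claim that only the center heap g0 matters:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def _grundy(center, leaves):
    # Impartial (all-green) blocking pebbles on an in-star: every arc points
    # from a leaf into the center, so a player may either
    #   (a) move any positive number of pebbles from the center to a leaf
    #       (the leaves are the center's in-neighbors), or
    #   (b) remove two pebbles from a leaf and place one on the center
    #       (the center is each leaf's out-neighbor).
    options = set()
    for i in range(len(leaves)):
        for m in range(1, center + 1):                 # move (a)
            new = list(leaves)
            new[i] += m
            options.add(_grundy(center - m, tuple(sorted(new))))
        if leaves[i] >= 2:                             # move (b)
            new = list(leaves)
            new[i] -= 2
            options.add(_grundy(center + 1, tuple(sorted(new))))
    g = 0
    while g in options:                                # mex of the options
        g += 1
    return g

def grundy_in_star(g0, leaf_heaps):
    return _grundy(g0, tuple(sorted(leaf_heaps)))

assert grundy_in_star(0, (1,)) == 0
assert grundy_in_star(1, (0,)) == 1
assert grundy_in_star(1, (2,)) == 1   # the leaf heap does not matter
assert grundy_in_star(2, (0, 0)) == 2
```

The recursion terminates because move (a) decreases the center heap with the total fixed, while move (b) decreases the total, so (total, center) decreases lexicographically.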
The nim dimension of a ruleset is the greatest integer k where a position in the
ruleset has value ∗2k−1 but no position has value ∗2k . A ruleset in which the nim di-
mension is unbounded is said to have infinite nim dimension, as Santos and Silva [6]
showed is true for konane. Theorem 3.5 implies that green blocking pebbles also
has infinite nim dimension, whereas the nim dimension of blue-red blocking peb-
bles is still unknown.
The fact revealed in Theorem 3.5 that an in-star is equivalent to a single nim heap
can be generalized to multiple heaps with an out-star.
Theorem 3.6. The value of an out-star with distribution ⟨g0 , g1 , . . . , gn ⟩ is ∗(g1 ⊕ g2 ⊕
⋅ ⋅ ⋅ ⊕ gn ), that is, the nim sum of the leaf heaps.
Proof. We note that this game is analogous to nim, except that instead of removing
pebbles from a heap, they are moved to the center at no cost. The player with the ad-
vantage simply plays the winning Nim strategy. Any move of a pebble from the center
vertex to a leaf can immediately be reversed at a net cost of one pebble from the cen-
ter. Thus the number of pebbles at the center does not contribute to the game value,
which equals the nim sum of the leaf heaps.
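Since the out-star's value is the star of the nim sum of its leaf heaps, the winning strategy is ordinary Nim arithmetic. A small illustrative sketch (the list representation and function names are ours, not the paper's):

```python
def out_star_value(leaves):
    """Nim value of an out-star: the XOR (nim sum) of the leaf heap sizes."""
    v = 0
    for g in leaves:
        v ^= g
    return v

def winning_move(leaves):
    """A Nim-style winning move (leaf index, new heap size), or None.

    If the nim sum s is nonzero, some heap g satisfies g XOR s < g;
    reducing that heap to g XOR s makes the overall nim sum zero.
    """
    s = out_star_value(leaves)
    if s == 0:
        return None          # P-position: no winning move exists
    for i, g in enumerate(leaves):
        if g ^ s < g:
            return i, g ^ s
```

For example, leaf heaps (1, 2, 4) have nim sum 7, and the winning move reduces the third heap from 4 to 3, after which 1 ⊕ 2 ⊕ 3 = 0.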
Theorem 3.7. If (g1 , . . . , gn ) is a distribution of green pebbles along a path directed left
to right, then the game value is ∗(g2 ⊕ g4 ⊕ ⋅ ⋅ ⋅ ), the star of the nim sum of the even-indexed heaps.
Proof. An empty path is trivial, so let us assume that the claim is false and consider
the set C of all counterexamples with the fewest total number of pebbles. From C,
let (g₁⁰ , g₂⁰ , . . . , gₙ⁰ ) be the last when ordered lexicographically. Any move from this
position either decreases the total number of pebbles or increases its lexicographic
position. Therefore all options of (g₁⁰ , g₂⁰ , . . . , gₙ⁰ ) are outside C, and hence the claim
holds for them. Since each option has a nim sum of even-indexed terms that differs from
g2 ⊕ g4 ⊕ ⋅ ⋅ ⋅ , and all smaller sums are realized through nim moves on the even-indexed
heaps, we see that (g₁⁰ , g₂⁰ , . . . , gₙ⁰ ) also satisfies the claim. Therefore, C is empty, and
the claim is true.
Note that in Theorems 3.5, 3.6, and 3.7 the strategy is equivalent to that of nim. In fact, in
these particular cases, blocking pebbles is very similar to the game of poker nim,
wherein players make nim moves but retain any removed pebbles and may later add
them back to a heap instead of removing. Although poker nim is loopy and blocking
pebbles is not, both games, played optimally, have the same strategy and the same
reversing responses to non-nim moves.
30 | K. Burke et al.
Figure 2.5: An oriented tree T and the resulting graph D(T ) from Construction 3.8.
We now introduce a reduction formula for all trees, which can be applied to the three
previous results.
Construction 3.8. Let T be any oriented tree with a given distribution of green pebbles,
let S be its set of source vertices, and let O be the set of vertices of T reachable by
an odd-length directed path from some vertex in S. Additionally, for a given subset W
of vertices, let p(W) be the combined total number of pebbles on W.
We construct the digraph D(T) from T as follows (see Figure 2.5):
1. V(D(T)) = {σ} ∪ O, where O has the same pebbling distribution as it does in T, and
σ is a vertex with no pebbles.
2. E(D(T)) = E(O) ∪ {σ → θ | θ ∈ O}.
Proposition 3.9. The game value of blocking pebbles on T is equal to the game value
on D(T).
Proof. The key observation is that the pebbling games on T and D(T) are both equiv-
alent to poker nim on the set O. Since the two games have the same set of nim moves,
their game values are equivalent.
In particular, the transitive triple graph, a K3 oriented without a cycle (Fig. 2.3),
has proven very difficult to analyze. However, we present here the set of 𝒫 -positions.
Theorem 3.10. A position in blocking pebbles on a transitive triple with g1 green peb-
bles on the source vertex, g3 on the sink, and g2 on the remaining vertex is a 𝒫 -position
if and only if g2 = g3 .
Proof. Note that, as in all other green-only positions, pebbles on the source vertex are
superfluous. Since any move that increases the total g2 + g3 can be undone, we can
consider these heaps as nim heaps and play accordingly.
We close this section with a very simple result, but one that may prove useful in
future investigations into the game.
Theorem 3.11. A single green pebble on the sink node of a transitive tournament on n
vertices is equivalent to a nim heap of size n.
Proof. We simply consider all options of this position. Since the pebble can only move
backward, to any of the previous nodes, this is equivalent to removing any number
of stones from a nim heap.
True player, on their turn, sets one unassigned variable to true. On the False player’s
turn, they set one unassigned variable to false. After all variables are set, the True
player wins if the CNF evaluates to true; otherwise, False wins. Positive CNF is
PSPACE-hard¹ [16].
To reduce from Positive CNF to Blocking Pebbles, we need to have both a way
for the players to alternate setting variables and a way for the evaluation of the CNF
to determine the winner of the game. We achieve this using three different gadgets.
We have a gadget for the players to set the variables (the Variable Gadget, Fig. 2.6), a
gadget for the False player to select a clause they believe they have falsified (the Clause
Gadget, Fig. 2.7), and a gadget that allows the True player to win if no clause is falsified,
but allows False to win if they do falsify a clause (the Goal Gadget, Fig. 2.8).
We consider the following properties of the formula in the Positive CNF position:
– The formula uses n variables and has m clauses.
– The ith clause contains Li unique literals.
The first gadget to describe is the variable gadget. See Fig. 2.6 for an example. The first
pebble moved onto the vertex labeled xi corresponds to that player choosing the vari-
able xi in Positive CNF. If Red (True) moves there, then all pebbles in the gadget are
unable to move for the remainder of the game. However, if Blue (False) moves there,
then they can later move that pebble down together with the other pebble, and then
down one of the paths to a single clause vertex (this will also give two moves for Red).
These clause paths are long to prevent players from traversing back “upward”
later with a cache of tokens on the clauses.
Each clause gadget (see Fig. 2.7) includes a vertex Ci connected via these paths (as
shown in our variable gadget) to the Li unique literals in that CNF clause; g is
part of the goal gadget (Fig. 2.8). We want to ensure that Blue can get exactly one
pebble to g if and only if Blue has moved in each of the variables of clause j.
We enforce this by requiring Blue to accumulate exactly a power of two (2^k ) pebbles
on the clause vertex Cj so that they can push those pebbles down a path of length k
to reach g. If there are Lj literals in clause j, then the path needs to require ⌈log2 (Lj )⌉
moves to reach g, meaning that there need to be ⌈log2 (Lj )⌉ − 1 vertices on the path
between Cj and g. In the many cases where Lj is not a power of 2, we have to start
with extra blue pebbles on the clause vertex to “round it up”; this is just f (Lj ), where
f (n) = 2^⌈log2 (n)⌉ − n.
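The padding arithmetic can be sketched as follows; the name f mirrors the text, while the bit-length form is our integer-only equivalent of 2^⌈log2 (n)⌉ − n:

```python
def f(n):
    """Extra blue pebbles that round a pile of n up to the next power of two.

    (n - 1).bit_length() equals ceil(log2(n)) for n >= 1, so this computes
    2^ceil(log2(n)) - n without floating-point log.
    """
    return (1 << (n - 1).bit_length()) - n

def interior_path_vertices(L):
    """Vertices strictly between C_j and g for a clause with L >= 2 literals."""
    return (L - 1).bit_length() - 1   # ceil(log2(L)) - 1
```

So a clause with 5 literals starts with f(5) = 3 extra blue pebbles and needs ⌈log2 5⌉ = 3 moves (two interior vertices) to reach g.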
We will give Red (True) a single red pebble on our goal gadget that can traverse
a path of length n ⋅ (5 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) − n. If Blue cannot reach g, then
(as we will prove later) Red will win with this path. If Blue can reach g, however, then
they can follow Red down that long path.
¹ At the original time of submission, Positive CNF was known to be hard for 11-CNF formulas [16].
Since then, an impressive improvement was discovered, showing that it is hard even on 6-CNF formulas [15].
Figure 2.6: Example of a variable gadget for variable xi . Here xi is included in clauses Ca , Cb ,
Cc , Cd , and Ce , a subset of all the clauses. Li is the number of unique literals in clause Ci , so a
path length of ⌈log2 (maxi {Li })⌉ ensures that even if a clause gets all the blue pebbles from the
variables, it will not be enough to go back deep into the variable gadget.
Figure 2.7: Clause gadget for a single clause Cj with Lj unique literals. f (Lj ) is the number of
pebbles needed to push Lj up to the next power of two, so f (Lj ) = 2^⌈log2 (Lj )⌉ − Lj .
Figure 2.8: Goal Gadget. This gives Red more moves than Blue if Blue cannot reach vertex g. If Blue
is able to reach g, then they can follow Red down the path and win.
If Blue can reach g from one of the clause vertices, they can use the goal gadget to
follow Red’s single pebble down the path. Red will be forced to traverse down this
path before Blue arrives, since Red’s only other pebbles are in the variable gadget,
where they can move in once or twice, depending on if Blue activates the variable or
not. It will take Blue at least two moves for each variable they activate to reach the goal
node, so Red’s pebble on the goal gadget will never get in the way of Blue’s, so long
as Blue activates at least as many variables as Red.
Lemma 4.1. When Blue moves on k variable gadgets, then no matter which player wins
the game, the number of moves Red can make is at least n ⋅ (5 + ⌈log2 (maxi {Li })⌉) +
2 ∑i f (Li ) − n and no more than n ⋅ (5 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) + k.
Proof. For the first part, clearly, Red can always make n ⋅ (5 + ⌈log2 (maxi {Li })⌉) +
2 ∑i f (Li ) − n moves by playing on the goal gadget, no matter what else may be hap-
pening in the game.
For the upper bound, Red can only move once on variable gadgets they assigned
and twice on variable gadgets that Blue assigned. This will give them up to 2k + n − k =
n + k moves total from variable gadgets.
In total, Red has at most n ⋅ (5 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) + k moves.
Proof. We complete the proof by showing that the transformation described results in
a proper reduction. In other words, that Red/True wins going first in the Positive CNF
instance if and only if Red/True wins going first in the resulting Blocking Pebbles
position. For notational simplicity, we will let X = ⌈log2 (maxi {Li })⌉. We will refer to
variable assignment in our reduction: this corresponds to a player moving a pebble of
their color onto the corresponding xi .
[⇒] Assume that Red/True wins moving first in the Positive CNF game. Then Red
has a strategy to prevent Blue from falsifying all variables in any one clause. Red may
follow the strategy of the corresponding game of Positive CNF. Whenever Blue does
not make a variable assignment when there are still unassigned variable gadgets re-
maining, Red may assign a variable arbitrarily. Similarly, when Red’s response in the
Positive CNF game has already been made, they may also play arbitrarily. Thus, once
all variables are assigned, for every true assignment in the corresponding Positive
CNF, there is a red pebble moved on the corresponding variable gadget (and perhaps
on other variable gadgets as well). Then, by construction, Blue cannot accumulate
enough Blue pebbles on any Ci vertex of a clause gadget to reach the goal gadget.
On the clause gadgets, without moving any pebble to the goal gadget, Blue may
make several moves from a pile of pebbles by repeatedly dropping one pebble to move
another pebble down an arc, then moving that pebble back to the pile. This strategy
produces 2(s − 1) moves from a pile of s ≥ 1 pebbles. Note that we cannot get any
more moves than this from a clause pile, since the long paths prevent pebbles from
being moved back toward the variable gadgets. So we obtain an upper bound on
the number of moves for Blue on the clause gadgets by assuming that all their blue
pebbles are on a single clause vertex: 2(n + ∑i f (Li ) − 1).
In total:

Blue moves ≤ n (claiming variables)
    + n ⋅ (X + 2) (moving claimed variables to clauses)
    + 2(n + ∑_{i=1}^m f (Li ) − 1) (back-and-forth moves in clauses)
    = n ⋅ (X + 5) + 2 ∑_{i=1}^m f (Li ) − 2
    < n ⋅ (X + 5) + 2 ∑_{i=1}^m f (Li ).
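The closing algebra can be spot-checked numerically; the values of n, X, and the f (Li ) terms below are illustrative, not from the paper:

```python
def blue_upper_bound(n, X, fs):
    """Sum of the three Blue move counts itemized above."""
    claiming = n                       # claiming variables
    to_clauses = n * (X + 2)           # moving claimed variables to clauses
    churn = 2 * (n + sum(fs) - 1)      # back-and-forth moves in clauses
    return claiming + to_clauses + churn

# The sum collapses to n*(X + 5) + 2*sum(f(L_i)) - 2 for any inputs.
for n, X, fs in [(4, 3, [1, 0, 3]), (7, 2, [0, 2, 2, 1])]:
    assert blue_upper_bound(n, X, fs) == n * (X + 5) + 2 * sum(fs) - 2
```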
makes True one of the variables Blue has already pretended they claimed, then Blue
will choose yet another remaining variable, pretend Red makes that True, and again
choose the appropriate winning response.
Since Blue has a winning strategy in the Positive CNF position, this will result in
all the variable gadgets for at least one clause being claimed by Blue pebbles.
Thus, in the Blocking Pebbles board resulting from our transformation, Blue has
a strategy to get enough Blue pebbles onto at least one Ci vertex to have a pebble reach
the goal vertex.
Now let us count the number of moves they will have:
Blue moves ≥ ⌊n/2⌋ (claiming variables)
+ ⌊n/2⌋ ⋅ (X + 2) (moving claimed variables to clauses)
Thus Blue will have more moves than Red and will win in the Blocking Pebbles
position.
Theorem 4.2 gives us the hardness for the general game on graphs, but, as with
many results like this, it is likely not the final word on the matter for two reasons. First,
it is not clear at this point what graph structures would arise in actual play, so the
range of this reduction may not line up with real-world competition.
Second, it is possible both that the game cannot be solved in polynomial space and
that it is hard for a more difficult complexity class (a superset of PSPACE, e. g.,
EXPTIME). One reason for this is that the sizes of the pebble piles could be exponential
in the size of the position description. Additionally, since the graph could contain
cycles, the game is loopy, which can often lead to EXPTIME-hardness.
Improvements to this result could include:
– Algorithms showing that a player can avoid our constructions from given starting
positions (while maintaining winnability for a player);
– Reductions to graphs more structured than those in the range of our reduction; or
– Reductions from problems complete for supersets of PSPACE.
5 Further directions
There remain many open questions and avenues for further study of blocking peb-
bles. In particular, we would like to resolve the question of game values for all-green
games. As we have mentioned, it has proven difficult to determine these values when
the underlying graph contains cycles.
Through the use of computational software, in particular, CGSuite [17], we have
been able to find positions with many dyadic game values. It remains an open question
whether or not there is a dyadic rational a/2^b that is not the value of any position in
blocking pebbles.
With regards to computational complexity, in addition to the potential improve-
ments listed in that section, the computational hardness of Green Blocking Pebbles
remains an open problem.
Bibliography
[1] M. Albert, R. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, CRC Press, 2007.
[2] M. Albert, J. Grossman, R. Nowakowski, and D. Wolfe, An introduction to clobber, Integers 5(2)
(2005).
[3] E. Berlekamp, J. Conway, and R. Guy, Winning Ways for Your Mathematical Plays, volumes 1–4,
AK Peters Natick, 2003.
[4] N. Calkin, K. James, J. Janoski, S. Leggett, B. Richards, N. Sitaraman, and S. Thomas, Computing
strategies for graphical Nim, Congr. Numer. 202 (2010), 171–185.
[5] F. Chung, Pebbling in hypercubes, SIAM J. Discrete Math. 2(4) (1989), 467–472.
[6] C. dos Santos and J. Silva, Konane has infinite nim-dimension, Integers 8(1) (2008), #G2, 6 pp.
[7] S. Faridi, S. Huntemann, and R. Nowakowski, Games and complexes I: Transformation via
ideals, in Games of No Chance 5, MSRI Publications, vol. 70, p. 285 (2019).
[8] M. Fukuyama, A nim game played on graphs, Theoret. Comput. Sci. 304(1–3) (2003), 387–399.
[9] G. Hurlbert, A survey of graph pebbling, Congr. Numer. 139 (1999), 41–64.
[10] L. Kirousis and C. Papadimitriou, Searching and pebbling, Theoret. Comput. Sci. 47 (1986),
205–218.
[11] Q. Liu, Red-Blue and Standard Pebble Games: Complexity and Applications in the Sequential
and Parallel Models, Master’s thesis, MIT, Cambridge, MA, 2017.
[12] R. Milley and G. Renault, Dead ends in misère play: the misère monoid of canonical numbers,
Discrete Math. 313(20) (2013), 2223–2231.
[13] L. Pachter, H. Snevily, and B. Voxman, On pebbling graphs, Congr. Numer. 107 (1995), 65–80.
[14] M. Prudente, Two-Player Graph Pebbling, PhD thesis, Lehigh University, Bethlehem, PA, 2015.
[15] M. Rahman and T. Watson, 6-uniform maker-breaker game is PSPACE-complete, in 38th
International Symposium on Theoretical Aspects of Computer Science (STACS 2021), Schloss
Dagstuhl-Leibniz-Zentrum für Informatik (2021).
[16] T. Schaefer, On the complexity of some two-person perfect-information games, J. Comput.
System Sci. 16(2) (1978), 185–225.
[17] A. Siegel, Combinatorial Game Suite: An open-source program to aid research in combinatorial
game theory, http://cgsuite.sourceforge.net (2004).
Kyle Burke, Matthew Ferland, and Shang-Hua Teng
Transverse Wave: an impartial
color-propagation game inspired by social
influence and Quantum Nim
Abstract: In this paper, we study Transverse Wave, a colorful, impartial combinato-
rial game played on a two-dimensional grid. We are drawn to this game because of its
apparent simplicity, contrasting intractability, and intrinsic connection to two other
combinatorial games, one about social influences and another inspired by quantum
superpositions. More precisely, we show that Transverse Wave is at the intersection
of two other games, the social-influence-derived Friend Circle and superposition-
based Demi-Quantum Nim. Transverse Wave is also connected with Schaefer’s logic
game Avoid True from the 1970s. In addition to analyzing the mathematical structures
and computational complexity of Transverse Wave, we provide a web-based version
of the game. Furthermore, we formulate a basic network-influence game, called De-
mographic Influence, which simultaneously generalizes Node-Kayles and Demi-
Quantum Nim (which in turn contains Nim, Avoid True, and Transverse Wave as
particular cases). These connections illuminate a lattice order of games, induced by
special-case/generalization relationships, fundamental to both design and compara-
tive analysis of combinatorial games.
Acknowledgement: We thank the anonymous reviewers for helpful suggestions. This research was
supported in part by an NSF grant (CCF-1815254) and by a Simons Investigator Award.
Kyle Burke, (c/o Christian Roberson), Department of Computer Science, Plymouth State University,
Plymouth, New Hampshire, USA, URL: https://turing.plymouth.edu/~kgb1013/
Matthew Ferland, Shang-Hua Teng, Department of Computer Science, University of Southern
California, Los Angeles, California, USA, e-mail: mferland@usc.edu; URL:
https://viterbi-web.usc.edu/~shanghua/
https://doi.org/10.1515/9783110755411-003
Ruleset 1 (Transverse Wave). For a pair of integer parameters m, n > 0, a game po-
sition of Transverse Wave is an m by n grid G, in which the cells are colored either
green or purple.4
For the game instance starting at this position, two players take turns selecting a
column of this colorful grid. A column j ∈ [n] is feasible for G if it contains at least one
green cell. The selection of j transforms G into another colorful m by n grid G ⊗ [j] by
recoloring purple both column j and every row that has a purple cell in column j.5 In the
normal-play convention the player without a feasible move loses the game.
Figure 3.1: An example move for Transverse Wave. a: A position from which the first player chooses
column 2. (This is a legal choice, because column 2 has a green cell.) b: Indigo cells denote those
that will become purple. These include the previously green cells in column 2 as well as the green
cells in rows where column 2 had purple cells. c: The new position after all cells are changed to be
purple.
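The move rule can be sketched on a grid of booleans, with True marking a purple cell; this encoding and the function names are ours, with the transform G ⊗ [j] read as play(grid, j):

```python
def feasible(grid, j):
    """Column j is feasible if it still contains a green (False) cell."""
    return any(not row[j] for row in grid)

def play(grid, j):
    """The position after selecting column j: every row with a purple
    cell in column j turns entirely purple, and column j turns purple."""
    assert feasible(grid, j)
    result = []
    for row in grid:
        if row[j]:                        # purple at column j: whole row flips
            result.append([True] * len(row))
        else:                             # otherwise only cell j is recolored
            new_row = row[:]
            new_row[j] = True
            result.append(new_row)
    return result
```

Since a move never recolors a purple cell green, each column can be selected at most once.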
Note that purple cells cannot change to green, and each move changes all green cells
in one column to purple (and possibly some in other columns as well). Thus any po-
sition with dimension m by n must end in at most n turns, and the height of a Trans-
verse Wave game tree is at most n. Consequently, Transverse Wave is solvable in
polynomial space.
Note also that the selection of column j could make some other feasible columns
infeasible.6 In Section 4, we will show that the interaction among columns intro-
duces sufficiently rich mathematical structures for Transverse Wave to efficiently
encode any PSPACE-complete game such as Hex [17, 22, 12, 23], Avoid True [25],
Node Kayles [25], Go [21], or Geography [21]. In other words, Transverse Wave is a
PSPACE-complete impartial game.
We have implemented Transverse Wave in HTML/JavaScript.7
By mapping purple cells to Boolean 0 (i. e., false) and green cells to Boolean 1
(i. e., true) we see the following proposition.
the (i, j)th entry is set to 1. Under normal play, the player with no feasible column to
choose loses the game.
By mapping purple cells to Boolean 1 and green cells to Boolean 0 we see another
isomorphism.
Theorem 1 (Pascal-Like Nimber Triangle). Let p be the number of rows that are not all-
purple, let k be the number of rows with odd parity, and let q = 0 if there are an even
number of columns with only green tiles and 1 otherwise. We define G′ to be
G′ = 0 if (k is even and p > 2k) or (k is odd and p < 2k),
G′ = ∗ if (k is even and p < 2k) or (k is odd and p > 2k),
G′ = ∗2 if p = 2k.
If G is a Transverse Wave position in which every selectable column has no more than
one purple tile (discounting rows with only purple tiles), then G = G′ + ∗q.
Proof. Note that all-green columns cannot be fully colored purple by a move from an-
other column, as that other column would either have to be all-purple (so it can not
Figure 3.2: Two Transverse Wave positions where we can apply Theorem 1. In (a), p = 3, k = 3, and
q = 1, so G = G′ + ∗q = ∗ + ∗ = 0. In (b), there is one all-purple row and one all-purple column, which
we ignore/remove. Then p = 2, k = 1, and q = 0, so G = G′ + ∗q = ∗2 + 0 = ∗2.
be chosen) or it would have to have green cells in a row where the all-green column
is purple (then the all-green column is not all-green). This means that each all-green
column contributes an additive ∗ to the game; it can be replaced with a single inde-
pendent move.
We claim that G′ is the game value without including the all-green columns, and G
is the game value with them.
In the beginning, each column has at most one purple cell. Thus the difference
between the number of purple cells in each column is at most one. After each play,
either only the cells in the chosen column become purple, or it will additionally turn
one extra entire row purple.
Assuming that G′ is as we claim, it is not difficult to see that G is correct. As previ-
ously noted, each all-green column is just a ∗, so G = G′ if there are an even number
of all-green columns, and G = G′ + ∗ if there are an odd number of them.
Figure 3.3 gives an illustrative triangle of the values of G′ for up to eight rows.
If the player chooses a column where the purple tile is in an odd parity row, then an
even number (2t) of other (playable) columns also have a single purple cell in that row;
all of those columns become all-green. As mentioned above, each of these columns
additively contributes ∗ to the value, for a total of 2t×∗ = 0. Thus the resulting option’s
value is just the same as one with p − 1 rows and k − 1 rows with odd parity. This is just
the value above and left in the triangle previously referenced.
If the player instead chooses a column where the purple tile is in a row with even
parity, then an odd number (2t + 1) of other columns also have their single purple cell
in that row. Thus the result is (2t + 1) × ∗ = ∗ added to the option. Thus the resulting
game is the value
# rows \ # odds   0    1    2    3    4    5    6    7    8
1                 0    0
2                 0    ∗2   ∗
3                 0    ∗    ∗    0
4                 0    ∗    ∗2   0    ∗
5                 0    ∗    0    0    ∗    0
6                 0    ∗    0    ∗2   ∗    0    ∗
7                 0    ∗    0    ∗    ∗    0    ∗    0
8                 0    ∗    0    ∗    ∗2   0    ∗    0    ∗
Figure 3.3: A Pascal-like nimber triangle of the values of Transverse Wave positions where each
column has no more than one purple tile. The levels of the triangle are the number of rows in the
game board (with at least one purple tile), and the diagonals indicate the number of odd-parity
rows. Each entry can be determined from the two above it by taking the mex of the above-left
entry and of the above-right entry plus ∗.
above and right in the triangle (the same number of odd rows and one less row overall)
plus ∗.
By inspection note that Table 3.1 represents the correct value for each possible pair
of parents in the game tree.
Now we have five cases, which invoke those table cases:
1. k even and p > 2k: case a, b, h, or j (thus the value is 0);
2. k odd and p < 2k: case a, g, h, or k (value 0);
3. k even and p < 2k: case c, e, or l (value ∗);
4. k odd and p > 2k: case e or f (value ∗);
5. p = 2k: case d or i (value ∗2).
Table 3.1: Values for a position based on its parents in the triangle: its options are the
above-left entry and the above-right entry plus ∗. A dash indicates that the corresponding
parent does not exist.

case   above-left   above-right   value
a      –            –             0
b      –            0             0
c      0            –             ∗
d      0            0             ∗2
e      0            ∗             ∗
f      0            ∗2            ∗
g      ∗            –             0
h      ∗            0             0
i      ∗            ∗             ∗2
j      ∗            ∗2            0
k      ∗2           0             0
l      ∗2           ∗             ∗
We prove the correctness of these cases by induction on the number of rows. Our base
case, a single row, holds by inspection.
For our inductive case, we assume that it holds for row p − 1.
We first show that it holds when k is even and p > 2k (case 1 in our list). This is
either the base case (case a), or there are 0 odd rows (case b), or k is some other even
number less than p/2. In this last case the left parent will have k − 1 with p − 1 > 2(k − 1),
which, by induction, must be ∗. Then the right parent has p − 1 ≥ 2k and thus is either
0 or ∗2. Thus it must be either case h or j.
Now we examine case 2 in our list, where k is odd and p < 2k. This is either the
base case (case a), or k = p (which must be case g, since it only has a single left parent
with even k), or it is some other odd k such that p < 2k. Then it has a right parent with
odd k and p < 2k, which is inductively 0. The left parents have even k and p−1 ≤ 2(k−1),
thus a left parent is either ∗ or ∗2, which are cases h and k, respectively.
For case 3 in our list, if k is even and p < 2k, then either k = p, in which case it has
a single left parent in case 2, which is case c, or it is some other even k with p < 2k. In
that case the right parent will have the same k, be in case 3, and have value ∗. The left
parent will have k − 1 and p − 1 ≤ 2(k − 1) and thus be 0 (case e) or ∗2 (case l).
For case 4, if k is odd and p > 2k, then we know that the left parent has even k − 1
with p − 1 > 2(k − 1), which is 0 (since it is case 1). The right parent has p − 1 ≥ 2k and
is thus ∗ (case e) or ∗2 (case f).
Finally, in case 5, if p = 2k, then k can be either odd or even. If it is odd, then the
left parent has even k − 1 and p − 1 > 2(k − 1) and is thus 0, and the right parent has
odd k and p − 1 < 2k and is thus 0, putting this in case d. If k is even, then the left
parent is odd and thus ∗, and the right parent is even and thus ∗, putting us in case i.
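The induction above can be cross-checked mechanically: build the triangle from the parent recurrence (mex of the above-left entry and of the above-right entry plus ∗) and compare it against the closed form of Theorem 1. This verification sketch is ours, not the paper's; nimbers 0, ∗, ∗2 are encoded as integers 0, 1, 2, and adding ∗ is XOR with 1:

```python
def mex(values):
    """Minimum excludant of a set of nonnegative integers."""
    m = 0
    while m in values:
        m += 1
    return m

def build_triangle(rows):
    """tri[(p, k)]: nim value for p rows of which k have odd parity."""
    tri = {(1, 0): 0, (1, 1): 0}                  # base case: a single row
    for p in range(2, rows + 1):
        for k in range(p + 1):
            opts = set()
            if k >= 1:
                opts.add(tri[(p - 1, k - 1)])     # above-left entry
            if k <= p - 1:
                opts.add(tri[(p - 1, k)] ^ 1)     # above-right entry plus *
            tri[(p, k)] = mex(opts)
    return tri

def closed_form(p, k):
    """G' from Theorem 1 as an integer nim value."""
    if p == 2 * k:
        return 2
    return 0 if (k % 2 == 0) == (p > 2 * k) else 1
```

Building eight rows agrees with the closed form at every entry and reproduces the triangle of Figure 3.3.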
For more general Transverse Wave positions, we provide examples with values
up to ∗7, as shown in Table 3.2.
These results can be extended to the related rulesets that are exact embeddings
of this, e. g., Avoid True and Demi-Quantum Boolean Nim. We discuss these trans-
formations in more detail later.
Although we can characterize Transverse Wave only in very special cases, the
Pascal-like formation of these game values provides a glimpse of a potentially elegant
structure. Something interesting to explore in the future is whether we can cleanly
characterize the game values when the game is restricted to have only two purple tiles
in each column. If so, then we would like to see how large this parameter can grow
before we encounter intractability. Answering these questions can also tell us about
the values of the other related games.
Table 3.2: Instances of values up to ∗7. The shorthand uses parentheses to indicate rows, with
the numbers inside marking which columns are purple in that row; a bare number denotes a
column with no purple cells. So, for example, in the ∗3 case the first row has columns 0 and 1
colored purple, the second row has column 2 colored purple, and column 3 has no purple cells.
0 (0)
∗ (0) (01)
∗2 (01) (2)
∗3 (01) (2) 3
∗4 (01) (234) (035)
∗5 (01) (234) (035) 6
∗6 (012) (034) (0156) (2578)
∗7 (012) (034) (0156) (2578) 9
2 Connection to a social-influence-inspired
combinatorial game
Because mathematical principles are ubiquitous, combinatorial game theory is a field
intersecting many disciplines. Combinatorial games have drawn wide inspiration
from logic [25] to topology [22], from military combat [13, 21] to social sciences [8], and
from graph theory [25, 2] to game theory [7]. Because the field cherishes challenging
games with simple rulesets and elegant game boards, combinatorial game design
is also a distillation process, aiming to derive elementary moves and transitions to
capture the essence of complex phenomena that inspire the designers.
In this and the next section, we discuss two other rulesets whose intersection
contains Transverse Wave. Although they share common ground, these games were
inspired by separate research fields. The first game, called Friend Circle,
is motivated by viral marketing [24, 20, 9], whereas the second, called Demi-Quantum
Nim, was inspired by quantum superpositions [18, 10, 5]. In this section, we first focus
on Friend Circle.
Figure 3.4: Example of a Friend Circle move. In the position on the left, let the seed set S =
{v1 , v2 , v3 , v4 }, all of which are acceptable to choose because they all have an incident false edge.
If a player chooses v2 , then the result is the right-hand position. In the second position, v2 has had
all of its incident edges become true. In addition, since (v1 , v2 ) was true, all incident edges of v1 have
also changed to true. The altered edges in the figure are represented in bold. Note that in the result-
ing position, the next player can only choose to play at either v3 or v4 , as v1 and v2 have only true
edges and v5 ∉ S.
up the game position. In Friend Circle, only the broadcast-type of interaction is ex-
ploited. We will use the following traditional graph-theory notation: In an undirected
graph G = (V, E), for each v ∈ V, the neighborhood of v in G is NG (v) = {u | (u, v) ∈ E}.
Ruleset 4 (Friend Circle). For a ground set V = [n] (of n people), a Friend Circle
position is defined by a triple (G, S, w) where the properties are as follows.
– G = (V, E) is an undirected graph. An edge between two vertices represents a
friendship between those people.
– S ⊂ V denotes the seed set.
– w : E → {f, t} (false and true) represents whether those friends have already spo-
ken about the target product (with at least one recommending it to the other).
To choose their move, a player (a viral marketing agent) picks a person v ∈ S from the
seed set such that there exists e = (v, x) ∈ E with w(e) = f. This represents choosing
someone who has not spoken about the product with at least one of their friends.
The result of the move choosing v is a new position (G, S, w′ ), where w′ is the same
as w except that for all x ∈ NG (v):
– w′ ((v, x)) = t, and
– if w((v, x)) = t, then w′ ((x, y)) = t for all y ∈ NG (x).
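A minimal sketch of this move rule, representing each edge as a frozenset pair and w as a dict of booleans (True for t); this encoding and the function names are ours:

```python
def feasible_seeds(edges, w, seeds):
    """Seeds that still have at least one incident false edge."""
    return [v for v in seeds if any(not w[e] for e in edges if v in e)]

def choose(edges, w, v):
    """Direct influence at v, then cascading influence through the
    neighbors that were already joined to v by a true edge."""
    w2 = dict(w)
    cascade = [x for e in edges if v in e and w[e]
               for x in e if x != v]      # neighbors via already-true edges
    for e in edges:
        if v in e:
            w2[e] = True                  # all of v's edges become true
    for x in cascade:
        for e in edges:
            if x in e:
                w2[e] = True              # x broadcasts to its own neighbors
    return w2
```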
When the selected vertices form a maximal independent set of G, the next player
cannot make a move and hence loses the game. It is well known that Node-Kayles is
PSPACE-complete [25].
Figure 3.5: Example of the reduction from Node Kayles to Friend Circle. On the left is a Node
Kayles vertex v and its neighborhood. On the right are those same vertices, along with a new
vertex tv , with t-weights on all the old edges and f on the new edge (v, tv ).
Note that by varying w : E → {f, t} we can realize any |V1 | × |V2 | Boolean matrix
with AG . Thus Friend Circle generalizes Transverse Wave.
Proof. Imagine these two games are played in tandem. We map the selection of a ver-
tex v ∈ S = V1 to the selection of the column associated with v in the matrix AG of G.
Because G is a complete bipartite graph, v is feasible for Friend Circle if there exists
u ∈ V2 such that w(u, v) = f. Thus the column associated with v in AG is not all 1s.
This is precisely the condition for v to be feasible in Crosswise OR over AG . The Direct
Influence at v in Friend Circle over G changes all edges of v to t and the subsequent
Cascading Influence on initially t neighbors of v in V2 is isomorphic to crosswise ORs.
Thus Friend Circle on G is isomorphic to Crosswise OR over AG .
Figure 3.6: On the left, there is a Friend Circle position on the complete bipartite graph between V1
and V2 , where the seed set S = V1 . Instead of labeling edges, we have removed all false edges and
include only true edges. On the right, there is the equivalent Transverse Wave position. The purple
cells correspond to the (true) edges in the bipartite graph.
To readers familiar with combinatorial game theory, it may seem odd that we ex-
plicitly define the description of moves in the game. However, it is integral for play-
ing quantum combinatorial games, as move description affects quantum collapse. For
more information, see [5].
Transverse Wave: an impartial color-propagation game | 53
Figure 3.7: Illustration from [5]: Winning strategy for Next player in Quantum Nim (2, 2), showing
that quantum moves impact the outcome of the game. Only one option from (2, 2) is shown because
it is a winning move. (There are four additional move options from ⟨(1, 2) | (2, 1)⟩ that are not shown
because they are symmetric to moves given.)
Quantum interactions between moves and positions, as demonstrated in [18, 10], can
have a significant impact on Nimber Arithmetic. In addition, as shown in [5], quantum
moves can also fundamentally impact the complexity of combinatorial games.
⎛ 5 3 0 4 2 2 ⎞
⎜ 1 3 3 2 1 0 ⎟
⎜ 0 0 4 6 5 7 ⎟   (3.1)
⎝ 4 2 5 0 1 2 ⎠
For example, the move (0, 0, 0, 0, −2, 0) applied to the quantum Nim position in
equation (3.1) collapses realizations 2 and 4, and transforms realizations 1 and 3, ac-
cording to Nim as in the following matrices:
⎛ 5 3 0 4 {2 − 2} 2 ⎞
⎜ ⊠ ⊠ ⊠ ⊠    ⊠    ⊠ ⎟       ⎛ 5 3 0 4 0 2 ⎞
⎜ 0 0 4 6 {5 − 2} 7 ⎟   =   ⎝ 0 0 4 6 3 7 ⎠
⎝ ⊠ ⊠ ⊠ ⊠    ⊠    ⊠ ⎠
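The collapse-and-transform rule can be sketched directly. This is our own illustration; representing a superposition as a list of pile-count rows is an assumption.

```python
def apply_demiquantum_move(realizations, move):
    """Apply a move given as a vector of pile deltas (nonpositive entries).

    Realizations where some pile would go negative are infeasible for the
    move and collapse (are dropped); the rest transform classically.
    """
    survivors = []
    for piles in realizations:
        new = [p + d for p, d in zip(piles, move)]
        if all(p >= 0 for p in new):
            survivors.append(new)
    return survivors
```

Applied to the position in equation (3.1) with the move (0, 0, 0, 0, −2, 0), realizations 2 and 4 collapse and realizations 1 and 3 transform, exactly as described above.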
Proposition 4 (QCGT Connection of Transverse Wave). Let Boolean Nim denote Nim
in which each heap has either one or zero pebbles. Demi-Quantum Boolean Nim is
isomorphic to Crosswise AND and hence isomorphic to Transverse Wave.
Proof. The proof uses the following equivalent “numerical interpretation” of collapses
in the (demi-)quantum generalization of Nim. When a realization collapses, we can ei-
ther remove it from the superposition or replace it with a Nim position in which all piles
have zero pebbles. For example, the following two Nim superpositions are equivalent
⎛ 5 3 0 4 0 2 ⎞       ⎛ 5 3 0 4 0 2 ⎞
⎜ ⊠ ⊠ ⊠ ⊠ ⊠ ⊠ ⎟       ⎜ 0 0 0 0 0 0 ⎟
⎜ 0 0 4 6 3 7 ⎟   ≡   ⎜ 0 0 4 6 3 7 ⎟
⎝ ⊠ ⊠ ⊠ ⊠ ⊠ ⊠ ⎠       ⎝ 0 0 0 0 0 0 ⎠
In Boolean Nim, each move can only remove one pebble from a pile. So we
can simplify the specification of the move by the index i alone. Note also that each
quantum Boolean Nim position can be specified by a Boolean matrix. Let B denote
the Boolean matrix of the Demi-Quantum Boolean Nim position under consideration. With the
above numerical interpretation of collapses in the demi-quantum generalization, applying
a move i collapses exactly those realizations whose row in B has a 0 in the ith entry,
and the rows of B are updated by the crosswise AND with that column selection. Thus
Demi-Quantum Boolean Nim with position B is isomorphic to Crosswise AND with position B.
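Under the numerical interpretation, the Demi-Quantum Boolean Nim move takes one line per row. The sketch below is our own illustration, with B a list of 0/1 rows:

```python
def boolean_nim_move(B, i):
    """Move i: rows with a 0 in column i collapse to the all-zero row;
    rows with a 1 there simply lose that pebble. (The move is legal only
    if some row has a 1 in column i.)"""
    return [[0] * len(row) if row[i] == 0
            else [0 if c == i else x for c, x in enumerate(row)]
            for row in B]
```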
As an aside, notice that positions with all green tiles are trivial, so they are not ap-
propriate starting positions. Interesting games need to be primed with some arbitrary
purple tiles. Thus the hard positions given by the reduction can be natural starting po-
sitions, and the hardness statement is particularly meaningful for Transverse Wave.
Ruleset 7 (Avoid True). A game position of Avoid True is defined by a positive CNF F
(an “and” of a set of or-clauses of only positive variables) over a ground set V and a subset
T ⊂ V, the “true” variables (which is usually the empty set at the beginning of the
game).
A turn consists of selecting one variable from V \ T, where a variable x ∈ V \ T is
feasible for position (F, V, T) if assigning all variables in T ∪ {x} to true does not make
F true. If x is feasible, then the position resulting from that move is (F, V, T ∪ {x}).
Under normal play, the next player loses if the position has no feasible move.
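Feasibility in Avoid True is a simple check: F stays false exactly when some clause still has no true variable. A minimal sketch (our own illustration; clauses as sets of variable indices is an assumed encoding):

```python
def feasible_moves(clauses, V, T):
    """Variables x in V \\ T whose selection leaves the positive CNF false,
    i.e., some clause is still disjoint from T | {x}."""
    return sorted(x for x in V - T
                  if any(not (clause & (T | {x})) for clause in clauses))
```

For the three clauses {2, 3, 6}, {1, 3, 7}, {2, 3, 7} with T = ∅, every variable except x3 is feasible, since x3 occurs in all three clauses.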
Theorem 4 ([5]). Demi-Quantum Boolean Nim and Avoid True are isomorphic games.
Proof. The part of the proof in [5] showing that Quantum Nim is Σ2 -hard also estab-
lishes the above theorem. Because establishing this theorem is not the main focus of
[5], we reformulate the proof here to make this theorem more explicit and to provide
a complete background of our discussion in this section.
We first establish the direction from Demi-Quantum Boolean Nim to Avoid True.
Given a position 𝔹 in Demi-Quantum Boolean Nim, we can create an or-clause from
each realization in 𝔹. Suppose 𝔹 has m realizations and n piles. We introduce n
Boolean variables V = {x1 , . . . , xn }. For each realization in 𝔹, the or-clause consists
of all variables corresponding to piles with zero pebbles. The reduced CNF F𝔹 is the
and of all these or-clauses. Taking a pebble from a pile collapses a realization for
which the pile has no pebble. Such a move is mapped to selecting the corresponding
Boolean variable making the or-clause associated with the realization true. Thus
playing Demi-Quantum Boolean Nim at position 𝔹 is isomorphic to playing Avoid
True starting at position (F𝔹 , V, ∅). Note that the reduction can be set up in polynomial
time.
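The forward reduction is only a few lines. This is our own sketch; clauses are encoded as sets of 1-based pile indices:

```python
def nim_to_avoid_true(B):
    """One or-clause per realization: the variables of the empty piles."""
    return [{c + 1 for c, pebbles in enumerate(row) if pebbles == 0}
            for row in B]
```

On the worked example that follows, the three rows A, B, C yield the clauses (x2 ∨ x3 ∨ x6), (x1 ∨ x3 ∨ x7), and (x2 ∨ x3 ∨ x7).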
For example, consider the following Demi-Quantum Boolean Nim position (with
heaps (columns) labeled by their indices and realizations (rows) labeled A, B, and C):
  1 2 3 4 5 6 7
⎛ 1 0 0 1 1 0 1 ⎞  A
⎜ 0 1 0 1 1 1 0 ⎟  B
⎝ 1 0 0 1 1 1 0 ⎠  C
(x2 ∨ x3 ∨ x6 ) ∧ (x1 ∨ x3 ∨ x7 ) ∧ (x2 ∨ x3 ∨ x7 )
        A                 B                 C
and T = ∅. The three clauses are labeled by their respective realization. Those variables
that appear in each clause are those with a zero in the matrix. Notice the following
properties.
– The third heap is empty in all Nim realizations, so no player can legally play there.
That is the same in the resulting Avoid True position; no player can pick x3 as it
is in all clauses and would make the formula true.
– The fourth and fifth heaps have a pebble in all three realizations in Nim, so a player
can play in either of them without any collapses. Because of this, those Boolean
variables do not occur in any of the Avoid True clauses.
For the reverse direction, consider an Avoid True position (F, V, T). Assume that V =
{x1 , . . . , xn } and F has m clauses C1 , . . . , Cm . We reduce it to a Boolean Nim superpo-
sition 𝔹(F,V,T) with m realizations and n piles. In the realization for Ci , we set piles
corresponding to variables in Ci to zero, to set up the mapping between collapsing the
realization and making the clause true. We also set all piles associated with variables
in T to zero, to set up the mapping between collapsing the realization and selecting an
already-selected variable. Again, we can use these two mappings to inductively establish that
the game tree for Demi-Quantum Boolean Nim at 𝔹(F,V,T) is isomorphic to the game
tree for Avoid True at (F, V, T). Note that the reduction also runs in polynomial time.
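The reverse reduction, sketched below (our own illustration), follows the worked example: a clause already made true by T gets no realization, and piles for variables in the clause or in T hold zero pebbles.

```python
def avoid_true_to_nim(clauses, n, T):
    """One realization per not-yet-true clause of a positive CNF over
    variables 1..n; already-selected variables (T) empty their piles."""
    return [[0 if (v in clause or v in T) else 1 for v in range(1, n + 1)]
            for clause in clauses if not (clause & T)]
```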
We demonstrate this reduction on the following Avoid True position with formula
(x1 ∨ x2 ∨ x3 ∨ x4 ) ∧ (x1 ∨ x5 ∨ x6 ∨ x7 ) ∧ (x1 ∨ x3 ∨ x6 ) ∧ (x2 ∨ x5 ∨ x8 )
          A                      B                    C                D
and already-chosen variables T = {x8 }. Following the reduction, we produce the fol-
lowing Demi-Quantum Boolean Nim position:
  1 2 3 4 5 6 7 8
⎛ 0 0 0 0 1 1 1 0 ⎞  A
⎜ 0 1 1 1 0 0 0 0 ⎟  B
⎝ 0 1 0 1 1 0 1 0 ⎠  C
58 | K. Burke et al.
Since x8 has already been made true (x8 ∈ T), the following properties hold.
– The eighth column is all zeroes.
– Since x8 appears in clause D, that clause does not have a corresponding realization
in the quantum superposition (i. e., a row in the matrix).
Because both Power Station and Hydropipe are simply graph-theoretical inter-
pretations of the proof for Theorem 4, the proof also provides the following corollary.
Corollary 1. Power Station and Hydropipe are isomorphic to Transverse Wave and
are therefore PSPACE-complete combinatorial games.
influencers try to target their ads at demographic groups to influence the population.
People can be influenced either by influencers’ ads or by “enthusiastic endorsement”
cascading through friend circles.
The following combinatorial game is distilled from the above scenario.
A player’s turn consists of choosing a demographic group Dk and an amount c > 0 by which
to influence it, where there exists v ∈ Dk such that Θ(v) ≥ c. (Since c > 0, there must
be an uninfluenced member of Dk .) Then:
– For all v ∈ Dk , Θ(v) decreases by c (all individuals in the group are influenced by c).
– If Θ(v) became negative by this subtraction (i. e., if it went from ℤ+ ∪ {0} to ℤ− ), then
Θ(x) = −1 for all x ∈ NG (v).8
Importantly, we perform all the subtractions and determine which individuals are
newly strongly influenced before they go and strongly influence their friends. We in-
clude an example move in Figure 3.8.
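The two-phase move (subtract everywhere first, then cascade) can be sketched as follows; the dict-based representation is our own assumption, not from the paper.

```python
def demographic_influence_move(G, theta, group, c):
    """Move (group, c): subtract c from every member's threshold, then every
    member that just went negative sets all of its neighbors to -1."""
    assert c > 0 and any(theta[v] >= c for v in group), "infeasible move"
    theta2 = dict(theta)
    # members going from nonnegative to negative are newly strongly influenced
    newly_strong = [v for v in group if 0 <= theta2[v] < c]
    for v in group:
        theta2[v] -= c
    for v in newly_strong:  # cascade only after all subtractions are done
        for x in G[v]:
            theta2[x] = -1
    return theta2
```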
Note that when influencing a demographic Dk , since c cannot be greater than the
highest threshold, the highest-threshold individual will not be strongly influenced by
the subtraction step. (If one of their neighbors does get strongly influenced, then they
will be strongly influenced in that manner.)
Since a player needs to make a move on a demographic group with at least one
uninfluenced individual, the game ends when there are no remaining groups to influ-
ence.
The following theorem shows that Demographic Influence generalizes Demi-
Quantum Nim.
Figure 3.8: A Demographic Influence move, influencing D3 = {v1 , v2 , v3 } by 4. Panel (a) shows G, Θ,
and D3 prior to making the move. Panel (b) shows the first part of the move: subtracting from the
thresholds of v1 , v2 , and v3 . Panel (c) shows the final results of the move: since v2 went negative, its
neighbors are set to −1 to show that they have been strongly influenced as well. (The magnitude of
negativity does not matter, so it is okay that the vertex at −2 “goes back” to −1.)
Proof. For every Demi-Quantum Nim instance Z with m realizations of n piles, we con-
struct the following Demographic Influence instance Z ′ , in which (1) V = {(r, c) |
r ∈ [m], c ∈ [n]}, (2) E = {((r1 , c1 ), (r2 , c2 )) | r1 = r2 } (i. e., vertices from all piles from the
same realization are a clique), (3) for all (r, c) ∈ V, Θ((r, c)) is set to be the number of
pebbles that the cth pile has in the rth realization of Nim, (4) D = (D1 , . . . , Dn ), where
Dc = {(r, c) | r ∈ [m]}, i. e., nodes associated with the cth Nim pile.
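The construction (1)–(4) can be written out directly. This is our own sketch, using 0-indexed realizations and piles:

```python
def nim_to_demographic(realizations):
    """Z': one vertex per (realization, pile); each realization's vertices
    form a clique; Theta stores pebble counts; D[c] collects pile c's nodes."""
    m, n = len(realizations), len(realizations[0])
    V = [(r, c) for r in range(m) for c in range(n)]
    E = [((r, c1), (r, c2)) for r in range(m)
         for c1 in range(n) for c2 in range(n) if c1 < c2]
    Theta = {(r, c): realizations[r][c] for (r, c) in V}
    D = [[(r, c) for r in range(m)] for c in range(n)]
    return V, E, Theta, D
```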
We claim that Z and Z ′ are isomorphic games. Imagine the two games are played in
tandem. Suppose the player in Demi-Quantum Nim Z makes move (k, q), removing q
pebbles from pile k. In its Demographic Influence counterpart Z ′ , the corresponding
player also plays (k, q), investing q units in demographic group k. Note that in Z, (k, q)
is feasible if and only if in at least one of the realizations, the kth Nim pile has at least
q pebbles. This is the same as q ≤ maxi∈[m] Θ((i, k)). Therefore (k, q) is feasible in Z if and
only if (k, q) is feasible in Z ′ .
When (k, q) is feasible, then for any realization i ∈ [m], there are three cases: (1) if
the kth Nim pile has more pebbles than q, then in that realization a classical transition
is made, and the q pebbles are removed from the pile. This corresponds to the
reduction of the threshold at node (i, k) by q; (2) if the kth Nim pile has exactly q peb-
bles, then all pebbles are removed from the pile. This corresponds to the case where
node (i, k) becomes weakly influenced; (3) if q is more than the number of pebbles in
the kth Nim pile, then the move collapses realization i. This corresponds to the case in
Demographic Influence, where (i, k) becomes strongly influenced and then strongly
influence all other vertices in the same row i. Therefore Z and Z ′ are isomorphic games
with a connection between the collapse of a realization in the quantum version and
the cascading of influence by endorsement in friend circles.
Figure 3.9: An example of the reduction from Node Kayles to Demographic Influence.
Proof. For a Friend Circle position, Z = (G, S, w), where G = (V, E), we construct
Demographic Influence instance Z ′ = (G′ , D, Θ) using the following properties.
– For each edge e ∈ E, we create a new vertex ve . Then V ′ = {ve | e ∈ E}.
– E ′ = {(ve1 , ve2 ) | there exists v ∈ V : e1 , e2 both incident to v}.
– G′ = (V ′ , E ′ ).
– D = {Ds }s∈S , where Ds = {ve | e is incident to s}.
– Θ : V ′ → {0, 1}, where if w(e) = f, then Θ(ve ) = 1; otherwise, if w(e) = t, then
Θ(ve ) = 0.
In other words, the connections are built on the line graph of the underlying graph
in Friend Circle. Each seed vertex defines the demographic group and associates
with all edges incident to it. Targeting this demographic group influences all these
edges and edges adjacent to t-edges in this set. See Figure 3.10 for an example of the
reduction.
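The line-graph construction can be sketched as follows (our own illustration; edges of G are frozensets and w is keyed by them):

```python
from itertools import combinations

def friend_to_demographic(E, S, w):
    """Line-graph construction: one node per edge of G; two nodes adjacent
    when the edges share an endpoint; Theta is 1 on f-edges and 0 on
    t-edges; D[s] groups the edges incident to seed s."""
    Vp = [frozenset(e) for e in E]
    Ep = [(e1, e2) for e1, e2 in combinations(Vp, 2) if e1 & e2]
    D = {s: [e for e in Vp if s in e] for s in S}
    Theta = {e: 1 if w[e] == 'f' else 0 for e in Vp}
    return Vp, Ep, D, Theta
```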
We complete the proof by showing that a play on Friend Circle position s is
isomorphic to playing (Ds , 1) on Demographic Influence, meaning choosing demo-
graphic Ds and investing c = 1. In Friend Circle, playing at s means that for all e
incident to s: (1) w(e) becomes t, and (2) if w(e) was already t, then for all f adjacent
to e, w(f ) becomes t.
In Demographic Influence, the corresponding play (Ds , 1) means that for all
ve ∈ Ds :
– Θ(ve ) is reduced by 1, which corresponds to setting w(e) to t;
– if Θ(ve ) becomes −1, which happens exactly when w(e) was previously t, then for all
vf ∈ NG′ (ve ), we have that Θ(vf ) also becomes −1; by our definition of E ′ , these vf are
exactly the vertices of the edges f adjacent to e in G.
Thus, following analogous moves, w(e) = t if and only if Θ(ve ) ≤ 0. A seed vertex s′ ∈ S
is surrounded by t-edges (and ineligible as a move) exactly when all vertices ve ∈ Ds
Figure 3.10: Example of the reduction. On the left, there is a Friend Circle instance. On the right,
there is the resulting Demographic Influence position.
are influenced, also making Ds ineligible as a move. This mapping of moves shows
that the games are isomorphic.
9 “Change the Game!” is the title of a section in Lessons in Play, Chapter 1, “Basic Techniques” [1].
– Demi-Quantum Nim is a generalization of Nim. (See Section 3.) (This is true of any
ruleset R and Demi-Quantum R.)
– Demi-Quantum Nim is a generalization of Transverse Wave. (Via Demi-Quan-
tum Boolean Nim, see Section 3.2.)
– Friend Circle is a generalization of both Transverse Wave (Section 2.3) and
Node Kayles (Section 2.2).
– Demographic Influence is a generalization of both Demi-Quantum-Nim and
Friend Circle (Section 4.2).
[Figure 3.11 diagram: arrows run from Nim and Transverse Wave to Demi-Quantum Nim; from Transverse Wave and Node Kayles to Friend Circle; and from Demi-Quantum Nim and Friend Circle to Demographic Influence.]
Figure 3.11: Generalization relationships of the rulesets in this paper. A → B means that A is a
particular case of B and B is a generalization of A.
Furthermore, several of the relationships outside of Figure 3.11 that were discussed in
this paper were completely isomorphic, preserving the game values, not just winnabil-
ity (as in Section 1.4). More explicitly, any new findings on the Grundy values for
Transverse Wave also give those exact same results for Crosswise OR, Crosswise
AND, Demi-Quantum Boolean Nim, Avoid True, Power Station, and Hydropipe.
We have been drawn to Transverse Wave not only because it is colorful, ap-
proachable, and intriguing, but also because the relationships with other games have
inspired us to discover more connections among games. Our work offers us a glimpse
of the lattice order induced by special-case/generalization relationships over mathe-
matical games, which we believe is an instrumental framework for both the design and
comparative analysis of combinatorial games. In one direction of this lattice, when
given two combinatorial games A and B, it is a stimulating and creative process to de-
Transverse Wave: an impartial color-propagation game | 65
sign a game with the simplest ruleset that generalizes both A and B.10 For example,
in generalizing both Nim and Undirected Geography, Neighboring Nim highlights
the role of “self-loops” in Graph-Nim. In our work the aim of capturing both Node
Kayles and Demi-Quantum Nim has contributed to our design of Demographic In-
fluence. In the other direction, identifying a well-formulated basic game at the in-
tersection of two seemingly unconnected games may greatly expand our understand-
ing of game structures. It is also a refinement process for identifying intrinsic build-
ing blocks and fundamental games. By exploring the lattice order of game relation-
ships we will continue to improve our understanding of combinatorial game theory
and identify new fundamental games inspired by the rapidly evolving world of data,
networks, and computing.
Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, A. K. Peters, Wellesley, Massachusetts, 2007.
[2] G. Beaulieu, K. G. Burke, and É. Duchêne, Impartial coloring games, Theoret. Comput. Sci. 485
(2013), 49–60.
[3] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for your Mathematical Plays,
volume 1, A. K. Peters, Wellesley, Massachusetts, 2001.
[4] C. L. Bouton, Nim, a game with a complete mathematical theory, Annals of Mathematics 3(1/4)
(1901), 35–39.
[5] K. Burke, M. Ferland, and S.-H. Teng, Quantum combinatorial games: structures and
computational complexity, CoRR abs/2011.03704, 2020.
[6] K. W. Burke and O. George, A PSPACE-complete graph Nim, in Games of No Chance 5, (2019),
259–270.
[7] K. W. Burke and S.-H. Teng, Atropos: a PSPACE-complete Sperner triangle game, Internet
Mathematics 5(4) (2008), 477–492.
[8] K. W. Burke, Science for Fun: New Impartial Board Games, PhD thesis, USA, 2009.
[9] W. Chen, S.-H. Teng, and H. Zhang, A graph-theoretical basis of stochastic-cascading network
influence: characterizations of influence-based centrality, Theor. Comput. Sci. 824-825 (2020),
92–111.
[10] P. Dorbec and M. Mhalla, Toward quantum combinatorial games, arXiv preprint
arXiv:1701.02193, 2017.
[11] D. Eppstein, Computational complexity of games and puzzles, 2006, http://www.ics.uci.edu/
~eppstein/cgt/hard.html.
[12] S. Even and R. E. Tarjan, A combinatorial problem which is complete in polynomial space,
J. ACM 23(4) (1976), 710–719.
[13] A. S. Fraenkel and D. Lichtenstein, Computing a perfect strategy for n × n chess requires time
exponential in n, J. Comb. Theory, Ser. A 31(2) (1981), 199–214.
10 It is also a relevant pedagogical question to ask when introducing students to combinatorial game
theory.
[14] A. S. Fraenkel, E. R. Scheinerman, and D. Ullman, Undirected edge geography, Theor. Comput.
Sci. 112(2) (1993), 371–381.
[15] M. Fukuyama, A Nim game played on graphs, Theor. Comput. Sci. 304(1-3) (2003), 387–399.
[16] D. Gale, A curious Nim-type game, American Mathematical Monthly 81 (1974), 876–879.
[17] D. Gale, The game of hex and the Brouwer fixed-point theorem, American Mathematical
Monthly 10 (1979), 818–827.
[18] A. Goff, Quantum tic-tac-toe: a teaching metaphor for superposition in quantum mechanics,
American Journal of Physics 74(11) (2006), 962–973.
[19] P. M. Grundy, Mathematics and games, Eureka 2 (1939), 198–211.
[20] D. Kempe, J. Kleinberg, and E. Tardos, Maximizing the spread of influence through a social
network, in Proceedings of the 9th ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, KDD ’03 (2003), 137–146.
[21] D. Lichtenstein and M. Sipser, Go is polynomial-space hard, J. ACM 27(2) (1980), 393–401.
[22] J. F. Nash, Some Games and Machines for Playing Them, RAND Corporation, Santa Monica, CA
1952.
[23] S. Reisch, Hex ist PSPACE-vollständig, Acta Inf. 15 (1981), 167–191.
[24] M. Richardson and P. Domingos, Mining knowledge-sharing sites for viral marketing, in
Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and
Data Mining, KDD ’02 (2002), 61–70.
[25] T. J. Schaefer, On the complexity of some two-person perfect-information games, Journal of
Computer and System Sciences 16(2) (1978), 185–225.
[26] R. P. Sprague, Über mathematische Kampfspiele, Tôhoku Mathematical Journal 41 (1935-36),
438–444.
[27] G. Stockman, Presentation: The game of Nim on graphs: NimG, 2004. Available at http://www.aladdin.cs.cmu.edu/reu/mini_probes/papers/final_stockman.ppt.
Alda Carvalho, Melissa A. Huggan, Richard J. Nowakowski, and
Carlos Pereira dos Santos
A note on numbers
Abstract: When are all positions of a game numbers? We show that two properties
are necessary and sufficient. These properties are consequences of the fact that, in
a number, it is not an advantage to be the first player. One of these properties implies
the other. However, checking for one or the other, rather than for just one, can often be
accomplished by looking only at the positions on the “board”. If the stronger property
holds for all positions, then the values are integers.
1 Introduction
When analyzing games, an early question is: is it possible that all the positions are
numbers? If that is true, then it is easy to determine the outcome of a disjunctive sum
of positions; just add up the numbers. It is also easy to find the best move; just play
the summand with the largest denominator. The problem is how to recognize when all
the positions are numbers.
Siegel [8, Exercise 3.15] states “If every incentive of G is negative, then G is a num-
ber.” This does not provide much insight or intuition. In fact, in most non-all-small-
games, there are nonzero positions, some of which are numbers and others not. Let
S be a set of positions of a ruleset. It is called a hereditarily closed set of positions of
a ruleset (HCR) if it is closed under taking options. These HCR sets are the natural
objects to consider.
https://doi.org/10.1515/9783110755411-004
68 | A. Carvalho et al.
There are two properties, either of which, if satisfied for all followers of a position,
tells us that the position is a number. Both are aspects of the first-move disadvantage
in numbers. The first compares one move against two moves. Consider a pair of moves,
one for each player, (GL , GR ). The property says that Left does not miss the opportunity
to choose GL merely because Right chooses GR : after Right’s move to GR , Left has an
option that is at least as good as GL . This idea of “no loss” is
also present in Definition 1.2.
Definition 1.1 (F1 Property). Let S be an HCR. Given G ∈ S, the pair (GL , GR ) ∈ Gℒ × Gℛ
satisfies the F1 property if there is GRL ∈ GRℒ such that GRL ⩾ GL or there is GLR ∈ GLℛ
such that GLR ⩽ GR .
Definition 1.2 (F2 Property). Let S be an HCR. Given G ∈ S, (GL , GR ) ∈ Gℒ ×Gℛ satisfies
the F2 property if there are GLR ∈ GLℛ and GRL ∈ GRℒ such that GRL ⩾ GLR .
In many games the literal form of positions will tell us whether they satisfy the F1
property or the F2 property with equality. See Section 2 for examples.
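Both properties are mechanical enough to verify by computer. The following sketch is our own illustration, using the standard recursive test for ⩾ on short games; it reproduces two pairs discussed later in the paper.

```python
class Game:
    """A short game in literal form."""
    def __init__(self, left=(), right=()):
        self.left, self.right = tuple(left), tuple(right)

def ge(g, h):
    """g >= h iff no Right option of g is <= h and no Left option of h is >= g."""
    return (not any(ge(h, gr) for gr in g.right)
            and not any(ge(hl, g) for hl in h.left))

def f1(gl, gr):
    """F1: some G^{RL} >= G^L, or some G^{LR} <= G^R."""
    return (any(ge(grl, gl) for grl in gr.left)
            or any(ge(gr, glr) for glr in gl.right))

def f2(gl, gr):
    """F2: some G^{LR} and G^{RL} with G^{RL} >= G^{LR}."""
    return any(ge(grl, glr) for glr in gl.right for grl in gr.left)

zero = Game()           # 0 = { | }
one = Game(left=[zero]) # 1 = {0 | }
```

For G = {0 | 1}, the pair (0, 1) satisfies F1 but not F2, since 0 has no Right option.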
When analyzing a new game, there are several results that can be used to gain an
insight into its structure. The next two should be included in this list. If every position
satisfies either property, then the values are numbers. If every position satisfies the F2
property, then the result is stronger. These have already been used in [3] and would
have helped when writing [1, 4]; see the examples in Section 2.
Lemma 3.2. Let S be an HCR. If for any position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ
satisfy the F1 property or the F2 property, then all positions G ∈ S are numbers.
Theorem 3.5. Let S be an HCR. If for any position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ
satisfy the F2 property, then all positions G ∈ S are integers.
Lemma 3.3 proves that the F2 property implies the F1 property. In practice, how-
ever, it is often easier to prove that at least one of the two properties holds rather than
trying to prove that just the F1 property holds.
We recall the results about numbers needed for this paper.
Theorem 1.3 ([1, 2, 8]). Let G be a number whose options are numbers.
1. After removing dominated options, the form of G has at most one Left option and at
most one Right option.
2. For the options that exist, GL < G < GR .
3. If there is an integer k such that GL < k < GR or if either GL or GR does not exist,
then G is an integer.
4. If both GL and GR exist and the previous case does not apply, then G is the simplest
number between GL and GR .
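Items 3 and 4 translate into a short routine for the simplest number strictly between two dyadic rationals. This is our own sketch, not code from the paper:

```python
import math
from fractions import Fraction

def simplest_between(lo, hi):
    """Simplest number x with lo < x < hi (dyadic rationals, lo < hi)."""
    lo, hi = Fraction(lo), Fraction(hi)
    assert lo < hi
    if lo < 0 < hi:              # 0 is the simplest number of all
        return Fraction(0)
    if lo >= 0:                  # smallest integer above lo, if it fits
        n = math.floor(lo) + 1
        if n < hi:
            return Fraction(n)
    else:                        # hi <= 0: largest integer below hi
        n = math.ceil(hi) - 1
        if n > lo:
            return Fraction(n)
    den = 2                      # otherwise halve until a dyadic fits
    while True:
        n = math.floor(lo * den) + 1
        if Fraction(n, den) < hi:
            return Fraction(n, den)
        den *= 2
```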
A note on numbers | 69
The most important point to remember is item 2, that is, when a player plays in a
number the situation gets worse for them. This has an important consequence when
games are being analyzed.
Theorem 1.4 (Number Avoidance Theorem [1, 2, 8]). Suppose that G is a number and H
is not. If Left can win moving first on G + H, then Left can do so with a move on H.
In many cases, when checking the properties, the Left and Right options will re-
fer to two specific moves on the “game board”, one by Left and one by Right. If this
happens, then the actual positions will automatically give the stronger conditions,
GLR ≅ GRL or GRL ≅ GL . Moreover, no calculations are required. Examples are given in
Section 2.
Example 2.1. Let G be a polychromatic chomp position. Let ℓ and r be black and
gray squares, respectively. If neither move eliminates the other, then playing both
moves in either order results in the same position Q, i. e., Gℓr ≅ Grℓ . Suppose instead
that ℓ eliminates r, as illustrated in Figure 4.1. In this case, Left can play her move
before or after Right’s move, i. e., Gℓ ≅ Grℓ .
Hence all (GL , GR ) satisfy one or both properties. Therefore by Lemma 3.2 all poly-
chromatic chomp positions are numbers.
Example 2.2. Let G be a blue-red-hackenbush string, and let ℓ and r be the edges
played by Left and Right, respectively. If r is higher up the string than ℓ, then playing
ℓ eliminates r. Thus Grℓ and Gℓ are identical. Otherwise, playing r eliminates ℓ, and
Gℓr ≅ Gr .
Hence by Lemma 3.2 all blue-red-hackenbush strings are numbers.
Example 2.3. Consider an m × n cutcake position. The moves are not independent
but almost so. For given ℓ and r, consider the pair of options GL = m × (n − ℓ) + m × ℓ
and GR = (m − r) × n + r × n. Then
GLR = m × (n − ℓ) + (m − r) × ℓ + r × ℓ,
GRL = (m − r) × n + r × (n − ℓ) + r × ℓ.
The moves cannot be interchanged to reach the same board position. However, we
know that if i > j, then k × i ⩾ k × j (intuitively, there are more moves for Left in k × i
than in k × j), and similarly i × k ⩽ j × k. The terms of GLR and GRL pair off: r × ℓ is in
both, (m − r) × ℓ ⩽ (m − r) × n and m × (n − ℓ) ⩽ r × (n − ℓ). Therefore GLR ⩽ GRL , and so
(GL , GR ) satisfies the F2 property. Therefore by Theorem 3.5 G is an integer.
Example 2.4.
1. Given a shove position G, if any token is pushed off the end of the strip, then
(Gℓ , Gr ) satisfies the F1 property; if not, then (Gℓ , Gr ) satisfies the F2 property.
2. Given a push position G, if any token pushes the other, then (Gℓ , Gr ) satisfies the
F1 property; if not, then (Gℓ , Gr ) satisfies the F2 property.
3. Given a lenres position G, if any digit in the move replaces the other, then (Gℓ , Gr )
satisfies the F1 property; if not, then (Gℓ , Gr ) satisfies the F2 property.
4. Given a domino shave position G, (GL , GR ) ∈ Gℒ × Gℛ satisfies the F1 property.
5. Given a divisors position G = (l, r) and a pair of options, (GL , GR ) = ((ℓ′ , r), (ℓ, r ′ )),
if ℓ′ = r or ℓ = r ′ , then (GL , GR ) satisfies the F1 property; if not, then (GL , GR )
satisfies the F2 property.
Even more can be said about blue-red-cherries [1] and erosion [1]. These games
all satisfy the F2 property, and thus by Theorem 3.5 they are integers.
Example 2.5.
1. Given a blue-red-cherries position G, all (GL , GR ) ∈ Gℒ × Gℛ satisfy the F2 prop-
erty. If Left removes a cherry from one end (ℓ) and Right removes a cherry from the
other end (r), then Gℓr ≅ Grℓ .
2. Given an erosion position G, all (GL , GR ) ∈ Gℒ ×Gℛ satisfy the F2 property. This is
vacuously true since by the rules it is impossible for both players to have options
at the same time.
For the values all to be numbers, the properties must always be true. It is not suf-
ficient for most of the positions to satisfy them. Two games that have 𝒩 -positions but
where many of the positions naturally satisfy one or the other property are the follow-
ing.
1. F1: In partizan euclid [5] with G = (p, q) and p > 2q, GLR = GR or GRL = GL .
2. F2: In the partizan subtraction subset of splittles [6], let a be the largest that
can be taken. Suppose the heap size is n ⩾ 2a; then Left taking ℓ and Right taking
r results in a heap of size n − ℓ − r regardless of the order. Thus Gℓr ≅ Grℓ .
3 Proofs
Theorem 3.1 is the central theoretical result: All the positions in an HCR set are
numbers if and only if there is no position and no number such that the sum is
an 𝒩 -position. This is all that is required to prove Lemma 3.2. Lemma 3.3 shows that
the F2 property implies the F1 property with strict inequality. In fact, Lemma 3.2 may
be written as a necessary and sufficient condition. This is Theorem 3.4, which only
uses the F1 property. Theorem 3.5 shows that if every position satisfies the F2 property,
then the numbers will be integers.
Theorem 3.1 (Outcomes and numbers). Let S be an HCR. All positions G ∈ S are num-
bers if and only if there is no G ∈ S and a number x such that G + x ∈ 𝒩 .
Proof. (⇒) If all positions G ∈ S are numbers, then, regardless of what the numbers x
are, all G + x are numbers. Hence there is no G ∈ S and a number x such that G + x ∈ 𝒩 .
(⇐) Let G ∈ S. If Gℒ = ∅ or Gℛ = ∅, then G is an integer. Suppose that Gℒ ≠ ∅ and
Gℛ ≠ ∅. By induction, since S is hereditarily closed, all GL ∈ Gℒ and GR ∈ Gℛ are
numbers. Hence, after removing dominated options, there are three possible cases:
Lemma 3.2. Let S be an HCR. If for any position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ
satisfy the F1 property or the F2 property, then all positions G ∈ S are numbers.
Proof. By Theorem 3.1 it suffices to prove that if all pairs (GL , GR ) ∈ Gℒ × Gℛ satisfy
the F1 property or the F2 property, then there is no G ∈ S and a number x such that
G + x ∈ 𝒩.
For the contrapositive, suppose that there are a position G ∈ S and a number x
such that G + x ∈ 𝒩 . Assume that the birthday of G in such conditions is the smallest
possible.
Since G + x ∈ 𝒩 , there are GL + x ⩾ 0 and GR + x ⩽ 0 (Theorem 1.4). Due to the
hypothesis, the pair (GL , GR ) satisfies the F1 property or the F2 property. If the pair
satisfies the F1 property, then there is GRL such that GRL ⩾ GL , or there is GLR such
that GLR ⩽ GR . If the first happens, then GRL ⩾ GL implies GRL + x ⩾ GL + x ⩾ 0.
That is incompatible with GR + x ⩽ 0. If the second happens, then GLR ⩽ GR implies
GLR + x ⩽ GR + x ⩽ 0. That is incompatible with GL + x ⩾ 0. In either case, we have a
contradiction; the pair (GL , GR ) cannot satisfy the F1 property.
Hence the pair (GL , GR ) satisfies the F2 property, and there are GLR ∈ S and GRL ∈ S
such that GLR ⩽ GRL . Since GL + x ⩾ 0, we have GLR + x ⩽̸ 0. Also, since GR + x ⩽ 0,
we have GRL + x ⩾̸ 0. The second inequality allows us to conclude that GLR + x ⩾̸ 0
because GLR ⩽ GRL . However, GLR + x ⩽̸ 0 and GLR + x ⩾̸ 0 imply that GLR + x ∈ 𝒩 ,
contradicting the smallest rank assumption. Therefore the pair (GL , GR ) cannot satisfy
the F2 property.
The pair (GL , GR ) does not satisfy the F1 property or the F2 property, which contra-
dicts the hypothesis. There is no G ∈ S and number x such that G + x ∈ 𝒩 . Therefore
all positions G ∈ S are numbers.
It is possible to have a pair of options that satisfies the F2 property without satis-
fying the F1 property; an example of that is a pair like (GL , GR ) = ({0, ∗ | ∗}, {∗ | 0, ∗}).
However, the options of ∗ do not satisfy the F2 property. On the other hand, if all fol-
lowers also satisfy the F2 property, then the next lemma shows that all pairs satisfy
the F1 property.
A note on numbers | 73
Lemma 3.3. Let S be an HCR such that, given any position G ∈ S, all pairs (GL , GR ) ∈
Gℒ × Gℛ satisfy the F2 property. Then all pairs satisfy the F1 property.
Theorem 3.4. Let S be an HCR. All positions G ∈ S are numbers if and only if for any
position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ satisfy the F1 property.
GLR with GLR − GR ⩽ 0 or some GRL with GL − GRL ⩽ 0. Hence GLR ⩽ GR or GL ⩽ GRL ,
and thus by definition (GL , GR ) satisfies the F1 property.
By Lemma 3.3, if all pairs (GL , GR ) satisfy the F1 property or the F2 property, then
all pairs satisfy the F1 property. That means that a pair satisfying the F2 property also
satisfies the F1 property. However, observe that the opposite is not true: it is possible to
have a pair satisfying the F1 property without satisfying the F2 property. For example,
if G = 1/2 = {0 | 1} (canonical form), then the pair (0, 1) satisfies the F1 property and does not satisfy the F2 property because GLℛ = ∅. The F2 property is a stronger condition
and has a surprising consequence.
Theorem 3.5. Let S be an HCR. If for any position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ
satisfy the F2 property, then all positions G ∈ S are integers.
Observation 3.6. Theorem 3.5 exhibits a sufficient but not necessary condition. Con-
sider S, an HCR whose game forms are {−2 | 0}, −2, −1, and 0 (the last three canonical
forms). Of course, all game values of S are integers. However, regarding G = {−2 | 0},
the pair (GL , GR ) = (−2, 0) does not satisfy the F2 property.
74 | A. Carvalho et al.
Appendix. Rulesets
divisors
Moves: Left is allowed to replace (l, r) by (l′ , r), where l′ < l is a divisor of r. Right is
allowed to replace (l, r) by (l, r ′ ), where r ′ < r is a divisor of l.
(5, 4) →L (2, 4) →R (2, 1) →L (1, 1)
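The rules above are easy to mechanize. The sketch below (the function names are ours, not from the paper) generates each player's moves in divisors and decides who wins moving first by the usual normal-play recursion: the player to move wins if some move leaves a position that the opponent loses moving first.

```python
from functools import lru_cache

def divisors_of(n):
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

@lru_cache(maxsize=None)
def wins_moving_first(l, r, player):
    """True if `player` ('L' or 'R') wins moving first from the divisors position (l, r)."""
    if player == 'L':
        moves = [(d, r) for d in divisors_of(r) if d < l]
        opponent = 'R'
    else:
        moves = [(l, d) for d in divisors_of(l) if d < r]
        opponent = 'L'
    # normal play: a player with no available move loses
    return any(not wins_moving_first(a, b, opponent) for (a, b) in moves)
```

For instance, this reports that (1, 1) is a previous-player win (neither player has a move), while (2, 1) is a win for Left.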
Moves: Left is allowed to choose two upside-down turtles and turn them onto their feet. Right is also allowed to choose a pair of turtles, provided that the leftmost is on its feet and the other is on its back; his move is to turn over both turtles.
polychromatic chomp
Position: A grid with one poison square in the lower left corner. Besides the poison
square, each square is either black or gray.
Moves: On her turn, Left chooses a black square and removes it and all other squares
above or to the right of it. On his turn, Right moves analogously, but he has to choose
a gray square.
Alda Carvalho, Melissa A. Huggan, Richard J. Nowakowski, and
Carlos Pereira dos Santos
Ordinal sums, clockwise hackenbush, and
domino shave
Dedicated to Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy; they taught us so much
1 Introduction
Hackenbush is a central game in Winning Ways [4]. It has many interesting properties.
The relationship between the ordinal sum decomposition and the valuation scheme
for paths and trees is central in this paper. The literature also includes variants with
new intriguing properties in new contexts. For example, yellow-brown hackenbush
[3] and all-small games, hackenbush sprigs [12] and misère games, and toppling
dominoes [7] and hot games.
Acknowledgement: Alda Carvalho is a CEMAPRE member and has the support of Project CEMAPRE –
UID/MULTI/00491/2019 financed by FCT/MCTES through national funds.
Melissa A. Huggan was supported by the Natural Sciences and Engineering Research Council of Canada
(funding reference number PDF-532564-2019).
Richard J. Nowakowski was supported by the Natural Sciences and Engineering Research Council of
Canada (funding reference number 2019-04914).
Carlos Santos is a CEAFEL member and has the support of UID/MAT/04721/2019 strategic project.
https://doi.org/10.1515/9783110755411-005
For this paper, normal play is the winning convention. The readers can consult any
edition of Winning Ways [4], specifically the sections on hackenbush, to gain further
insight. We assume general knowledge about normal play, but to keep the material
self-contained, we clarify some ideas about the concepts of ordinal sum and, also, the
particular case of ordinal sums of blue-red hackenbush strings.
G : H = {Gℒ , G : H ℒ | Gℛ , G : H ℛ }.
The Colon Principle states that the form of the base matters, but not the form of the
subordinate. Formally,
Theorem 1.1 (McKay’s theorem). If G has no reversible options and K is the canonical
form of G, then G : H = K : H.
Figure 5.4: Example of signed binary notation for a blue-red hackenbush string.
Also, if G and H are two blue-red hackenbush strings, it is possible to have a closed
formula to evaluate the game value of G : H knowing the game values of G and H. That
is van Roode’s method [14].
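For concreteness, Berlekamp's rule [2] for reading off the value of a blue-red hackenbush string can be sketched as follows; the encoding of a string as a word over '+' and '−', read from the ground up, is our own convention.

```python
from fractions import Fraction

def string_value(edges):
    """Value of a blue-red hackenbush string via Berlekamp's rule.
    `edges` is a word over {'+', '-'}, read from the ground up
    ('+' = blue/Left edge, '-' = red/Right edge)."""
    if not edges:
        return Fraction(0)
    if edges[0] == '-':                      # negative string: negate its mirror image
        flipped = ''.join('+' if e == '-' else '-' for e in edges)
        return -string_value(flipped)
    k = edges.find('-')
    if k == -1:                              # all blue edges: the integer len(edges)
        return Fraction(len(edges))
    # integer part: blue edges before the first colour change, minus one;
    # fractional part: read the remaining edges as bits and append a final 1
    bits = ''.join('1' if e == '+' else '0' for e in edges[k + 1:]) + '1'
    return k - 1 + Fraction(int(bits, 2), 2 ** len(bits))
```

For example, string_value('+-') is 1/2 and string_value('+-+') is 3/4.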
Theorem 1.3 (van Roode's method). Let G be a positive blue-red hackenbush string whose game value is n + d with −1 < d = −k/2^j ⩽ 0. Then we have the following.
1. If G is an integer and H is a positive blue-red hackenbush string, then G : H = G + H.
2. If G is not an integer and H is a positive blue-red hackenbush string whose game value is m + d′ , where m is a positive integer and −1 < d′ ⩽ 0, then G : H = n + d + (1/2^{j+m})(2^m − 1 + d′ ).
3. If H is a negative blue-red hackenbush string whose game value is m + d′ , where m is a negative integer and 0 ⩽ d′ < 1, then G : H = n + d + (1/2^{j+|m|})(1 − 2^{|m|} + d′ ).
Proof. This result is well known and follows from either Berlekamp's rule [2] or van Roode's rule [14] for blue-red hackenbush strings.
Consider the blue-red hackenbush strings G and H with
G = 3/8 = 1 − 5/8 and H = 3 1/2 = 4 − 1/2,
so that n = 1, d = −5/2^3 , j = 3, m = 4, and d′ = −1/2. We want to evaluate G : H:
G : H = n + d + (1/2^{j+m})(2^m − 1 + d′ ) = 1 − 5/8 + (1/2^7)(2^4 − 1 − 1/2) = 125/256.
Now consider G and H with
G = 2 5/8 = 3 − 3/8 and H = −1 3/4 = −2 + 1/4,
so that n = 3, d = −3/2^3 , j = 3, m = −2, and d′ = 1/4. We want to evaluate G : H:
G : H = n + d + (1/2^{j+|m|})(1 − 2^{|m|} + d′ ) = 3 − 3/8 + (1/2^5)(1 − 2^2 + 1/4) = 2 69/128.
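Both computations can be checked mechanically. The following sketch implements the three cases of Theorem 1.3 with exact arithmetic, using the conventions of the theorem (n = ⌈g⌉, d = g − n = −k/2^j in lowest terms, and m = ⌈h⌉ or ⌊h⌋ according to the sign of h); the function name is ours.

```python
import math
from fractions import Fraction

def ordinal_sum(g, h):
    """G : H for blue-red hackenbush strings with values g > 0 and h (Theorem 1.3)."""
    g, h = Fraction(g), Fraction(h)
    if h == 0:
        return g
    n = math.ceil(g)
    d = g - n                                # -1 < d = -k/2^j <= 0
    if d == 0 and h > 0:                     # case 1: G is an integer
        return g + h
    j = d.denominator.bit_length() - 1       # the denominator of d in lowest terms is 2^j
    if h > 0:                                # case 2: h = m + d', m > 0, -1 < d' <= 0
        m = math.ceil(h)
        dp = h - m
        return n + d + (2**m - 1 + dp) / Fraction(2**(j + m))
    m = math.floor(h)                        # case 3: h = m + d', m < 0, 0 <= d' < 1
    dp = h - m
    return n + d + (1 - 2**(-m) + dp) / Fraction(2**(j - m))
```

This reproduces both examples: ordinal_sum(Fraction(3, 8), Fraction(7, 2)) is 125/256, and ordinal_sum(Fraction(21, 8), Fraction(-7, 4)) is 325/128, that is, 2 69/128.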
Van Roode’s method was conceived to evaluate ordinal sums of blue-red hack-
enbush strings. However, since blue-red hackenbush strings have no reversible op-
tions,1 by Theorem 1.1 this method can be used to evaluate an ordinal sum of numbers
where the base is in canonical form.
1 In fact, given a blue-red hackenbush string G and a Left option GL = {. . . | GLR , . . .}, we cannot have G ⩾ GLR , since in the game G − GLR , Right wins by moving to GLR − GLR . A similar argument holds for the lack of reversible Right options.

The subtree B1 is the part of the tree remaining after deleting t1 , and, for i > 1, Bi is the part of the tree, not counting ti itself, that is eliminated by deleting ti−1 but not by deleting ti ; in other words, Bi is the subtree above ti−1 that does not include ti . The idea is represented in Figure 5.5.
Proof. The proof follows by induction on the size of G. If E(TG ) = {t1 }, then G = M1 .
We may now suppose that E(TG ) = {t1 , t2 , . . . , tn } and n > 1. Let H be the position
formed by G \ M1 , that is, the tree above but not including t1 , and the vertex s1 is the
ground. The trunk of H is {t2 , t3 , . . . , tn }.
In G, there are two types of moves. Either in M1 , delete t1 leaving B1 ; or delete ti ,
i > 1, which is a move in H. By induction, the move in H is to M1 : H L (M1 : H R ) for
some Left (Right) option of H. Also by induction, H = M2 : (. . . : (Mn−1 : Mn ) . . .). Then
it follows that
G = {M1L , M1 : H ℒ | M1R , M1 : H ℛ }
= M1 : (M2 : (. . . : (Mn−1 : Mn ) . . .)).
Theorem 2.2 shows that we will have to evaluate ordinal sums. If the values
were arbitrary, then no formula could be given. However, clockwise blue-red hack-
enbush positions have similar strategic features to blue-red hackenbush strings.
Specifically, for either player, the unique best move is their highest, and the value is a
number. This we prove next. Each Mi has only one option, that of deleting the trunk
edge. Therefore the ordinal sums will be of the form {x | } : y or { | x } : y for numbers x
and y. A closed formula for this type of ordinal sum is one of the main contributions
of this paper (Subsection 2.2). Before that, we prove that clockwise blue-red hack-
enbush positions only have numbers as game values and that the best options for the
players are the topmost allowed moves.
Lemma 2.3. Let G be a clockwise blue-red hackenbush position. If ti and tj are blue
edges and j > i, then Gj > Gi . If ti and tj are red edges and i < j, then Gj < Gi .
Proof. We first assume that ti and tj are both blue edges and show that Gj − Gi > 0.
The proof follows by induction on the number of edges in G. If G consists of exactly
two blue edges, then G = 2. If Left deletes the higher edge, then this leaves a tree with
exactly one blue edge which has value 1. If she deletes the lower edge, then this leaves
a tree with zero edges, and it has value 0. Thus the lemma holds for the base case. We
now suppose G has more than two edges.
Left, going first, can win by deleting ti in Gj since this results in Gi − Gi = 0.
Now consider Right moving first. If Right plays an edge of Gj but does not eliminate
the edge ti , then Left responds in Gj by deleting ti . Again, this results in Gi − Gi = 0. If
Right plays in Gj and does eliminate ti , then he has deleted an edge on the trunk, i. e.,
some tℓ , ℓ < i. This leaves Gℓ −Gi . Left responds in −Gi by deleting tℓ that, by symmetry,
is a blue edge. This gives Gℓ − Gℓ = 0.
The last remaining case is that Right deletes an edge on the new trunk in −Gi . Let the trunk of Gi be T1 = {t1′ , t2′ , . . . , tm′ }, where ta′ = ta for 1 ⩽ a ⩽ i − 1. Right deletes tℓ′ for some i ⩽ ℓ ⩽ m. We claim that deleting ti in Gj is a winning move. To see this, let H be identical to Gi but with an extra blue edge tm+1′ at the top of T1 . After Right has deleted tℓ′ in −Gi and Left has deleted ti in Gj , the situation is identical to playing in Hm+1 − Hℓ . Now both ti and tj are not in H, and thus H has at least one fewer edge than G. It follows by induction that Hm+1 − Hℓ > 0.
The case in which ti and tj are both red edges follows by considering negatives, with arguments similar to the above.
Proof. Part 1 follows immediately from Lemma 2.3. Part 1 gives that G has only one Left
and one Right undominated option, i. e., G = {GL | GR }. By induction on the options
both options GL and GR are numbers. Let H be G with an extra blue edge on the top of
the trunk. Both G and GL are Left options of H, and GL < G by Lemma 2.3. Similarly,
by adding a red edge we have G < GR . Since GL < GR , G is a number.
{ n | } : 2 = {{ n | } : 1 | } = {{{ n | } | } | }.
Similarly,
{ n | } : −2 = { n | { n | } : −1} = { n | { n | {n | }}}.
In either case, the Simplicity Rule must be applied three times in a row, and one of the
options remains the same. This motivates the following definition.
If a and b are numbers and a < b, then the value of {a | b} is the dyadic rational p/2^q with a < p/2^q < b and q minimal. In other words, we will often use the fact that {a | b} is the number c, a < c < b, that has the fewest digits in its binary expansion.
The next result makes explicit the simplicity rule for evaluating {a | b} for numbers
0 ⩽ a < 1 and a < b. We will then generalize the rule for iterated ordinal sums in Sec-
tion 2.2. The procedure will use the binary expansions of numbers. Each dyadic only
has a finite number of nonzero bits in its binary expansion; however, the procedure
sometimes uses 0-bits past the last 1-bit. Therefore, although we denote the binary
expansion of d by d =2 0.d1 d2 . . . dn , when we refer to the “first index” or “first occur-
rence”, we may consider the infinite binary expansion. We abuse the “=2 ” notation
to mean that the important terms following the equal sign will be in binary. If this is
followed by another “=” sign, then we have reverted to base 10.
Theorem 2.5. Let d be a dyadic rational such that 0 < d < 1 and d =2 0.d1 d2 . . . dn . Let d < d′ ⩽ +∞, and if d′ < 1, then let d′ =2 0.d1′ d2′ . . . dm′ .
1. If d′ > 1, then {d | d′ } = 1.
2. If d′ = 1 and i is the index of the first 0-bit of the binary expansion of d, then {d | 1} = 1 − 1/2^i .
3. If d′ < 1, then let i be the first index such that di = 0 and di′ = 1. Also, let j be the least index such that j > i and dj = 0.
If d′ =2̸ 0.d1 d2 d3 . . . di−1 1, then {d | d′ } =2 0.d1 d2 d3 . . . di−1 1.
If d′ =2 0.d1 d2 d3 . . . di−1 1, then {d | d′ } =2 0.d1 d2 d3 . . . dj−1 1.
{21/32 | 45/64} =2 {0.10101 | 0.101101} =2 0.1011 = 11/16 ;
{75/128 | 19/32} =2 {0.1001011 | 0.10011} =2 0.10010111 = 151/256 .
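The simplicity rule itself is easy to implement exactly, and gives an independent check of Theorem 2.5 on the two values computed above; this is a sketch, with names of our own choosing.

```python
import math
from fractions import Fraction

def simplest_between(a, b):
    """The value of {a | b} for numbers a < b: the unique p/2^q strictly between
    a and b with q minimal (and, when an integer fits, the integer nearest 0)."""
    a, b = Fraction(a), Fraction(b)
    lo, hi = math.floor(a) + 1, math.ceil(b) - 1   # integers strictly between a and b
    if lo <= hi:
        return Fraction(min(max(lo, 0), hi))       # clamp 0 into [lo, hi]
    q = 1
    while True:                                    # smallest q that works; p is then unique
        p = math.floor(a * 2**q) + 1               # least p with p/2^q > a
        if Fraction(p, 2**q) < b:
            return Fraction(p, 2**q)
        q += 1
```

Here simplest_between(Fraction(21, 32), Fraction(45, 64)) returns 11/16 and simplest_between(Fraction(75, 128), Fraction(19, 32)) returns 151/256, as above.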
We have seen that there are two types of ordinal sums that occur in clockwise
blue-red hackenbush. We write the formulas explicitly. The first, in Theorem 1.3, is
standard and appears in the analysis of blue-red hackenbush strings. The second
happens when the literal form of the base is {x | } or { | x}, where x is a number. That is
analyzed in the next section.
{Gℒ + n | Gℛ + n} = n + {Gℒ | Gℛ }
[1, 4, 6, 13]. The following theorem describes a version of the translation principle for
ordinal sums. Once we have this result, the case { d | }: number (0 ⩽ d < 1) turns out
to be the only case to study.
Lemma 2.7 (Translation principle for ordinal sums of numbers). Let 0 ⩽ d < 1 be a
dyadic rational, let w be any number, and let n be an integer. Then
{n + d | } : w = n + ({d | } : w).
{n + d | } : {wL | wR }
= {n + d, {n + d | } : wL | {n + d | } : wR }
= {n + d, n + ({d | } : wL ) | n + ({d | } : wR )}   (induction)
= n + {d, {d | } : wL | {d | } : wR }   (translation principle)
Lemma 2.8. Let d be a dyadic rational, 0 ⩽ d < 1, and let m be an integer. If m > 0, then
{d | } : m = {{d | } : m − 1 | }.
If m < 0, then
{d | } : m = {d | {d | } : (m + 1)}.
{d | } : m = {d, {d | } : 0, {d | } : 1, . . . , {d | } : (m − 1) | },
{d | } : (−m) = {d | {d | } : 0, {d | } : −1, . . . , {d | } : (−m + 1)}.
({d | } : (m − 1)) : 1 = {{d | } : (m − 1) | } = {d | } : m.
If m < 0, then
Observation 2.11. One consequence of Theorem 2.10 is that, for 0 ⩽ d < 1 and a posi-
tive integer m, { d | } : m = { d | } + m, that is, the ordinal sum coincides with the usual
sum.
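Lemma 2.8, combined with Observation 2.11 and the Simplicity Rule, gives a direct recursive evaluation of {d | } : m. The sketch below is self-contained (it carries its own simplest-number helper); the function names are ours.

```python
import math
from fractions import Fraction

def simplest_between(a, b):
    """Simplicity rule: the value of {a | b} for numbers a < b."""
    a, b = Fraction(a), Fraction(b)
    lo, hi = math.floor(a) + 1, math.ceil(b) - 1
    if lo <= hi:                                   # an integer fits; take the one nearest 0
        return Fraction(min(max(lo, 0), hi))
    q = 1
    while True:
        p = math.floor(a * 2**q) + 1
        if Fraction(p, 2**q) < b:
            return Fraction(p, 2**q)
        q += 1

def base_ordinal_sum(d, m):
    """{d | } : m for a dyadic 0 <= d < 1 and an integer m (Lemma 2.8)."""
    d = Fraction(d)
    if m >= 0:
        return Fraction(1 + m)                     # Observation 2.11: {d | } : m = {d | } + m
    return simplest_between(d, base_ordinal_sum(d, m + 1))
```

For instance, base_ordinal_sum(Fraction(5, 8), -1) is 3/4, that is, {0.101 | } : −1 =2 0.11, and base_ordinal_sum(0, -2) is 1/4.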
Example 2.12.
As mentioned before, the signed binary notation is more useful for game practice because of the correspondence 1 ↔ “blue edge” and −1 ↔ “red edge”. The following theorem, concerning the use of signed binary representations, is presented without proof, since it is similar to the previous one.
Theorem 2.13. Let d be a dyadic rational such that 0 < d < 1, and let 1.1d2 . . . dk be its
signed binary expansion. Let m be a negative integer. The signed binary expansion of
{d | } : m is obtained in the following way.
Case 1. If the number of minus ones in the signed binary expansion of d is larger than
|m|, then the signed binary expansion of {d | } : m is 1.1d2 d3 . . . di−1 , where i is the index
of the (|m| + 1)th 1-bit in the signed binary expansion of d.
Case 2. If the number of minus ones in the signed binary expansion of d, say n, is less than or equal to |m|, then the signed binary expansion of {d | } : m is 1.1d2 d3 . . . dk 1 followed by |m| − n further 1s.
Example 2.14.
Proof. These are restatements, via Lemma 2.8, of Theorem 2.10 for part 1 and Theo-
rem 2.10 for parts 2 and 3.
G = n + k/2^j + d′/2^j .
Case 1. d = 0 and m ⩽ 0.
G′ = {0 | } : (m + d′ ) = 1 + (1/2^{|m|})(1 − 2^{|m|} + d′ ) = 1/2^{|m|} + d′/2^{|m|} .
By Corollary 2.15, {d | } : m = 1/2^{|m|} , so G′ = 1/2^{|m|} + d′/2^{|m|} , and the theorem holds.
This is the hardest case. To prove it, we will construct a blue-red hackenbush string H whose value is k/2^j + d′/2^j (Part 1). We will then prove that G′ − H is a 𝒫 -position (Part 2).
is the signed binary expansion of the game value of {d | } : m. The hypothesis of the current theorem states that this value is k/2^j . Hence there are j binary places.
Now the game value of the following blue-red hackenbush string H is k/2^j + d′/2^j : take the signed binary expansion of {d | } : m (the digits of d, containing q 1s, followed by |m| − q further 1s) and append the digits 1 1 d2′ d3′ . . . dw′ of d′ . That happens because the added rightmost part d′ is shifted by j binary places.
It remains to prove (Part 2) that
G′ − H = ({d | } : (m + d′ )) − H
is a 𝒫 -position.
First, there is a correspondence between the moves in the digits of d′ and −d′ .
Also, there is a correspondence between Right moves in the |m| 1s of the subordinate
of the upper component and Left moves in the ones of {| −d} : (−m) in the bottom
component. Regarding those correspondences, there is a Tweedledee–Tweedledum
strategy.
Second, if Left moves to d in the upper component (entering the base), then Right answers by removing the 1 immediately after the digits of −d in the bottom component, and vice versa.
Third, if Right removes any 1 of the digits of −d in the bottom component, then Left answers by moving to d (entering the base) in the upper component, and wins.
Since the second player wins, G′ − H ∈ 𝒫 , and G′ = H = k/2^j + d′/2^j .
{d | } : m =2 {0.101 | } : −1 =2 0.11 = 3/4 .
Exercise. Verify that the game value of the clockwise blue-red hackenbush position exhibited in Figure 5.2 is given by
{−2 | } : (−1 : ({−1 | } : −1)) = −1 1/4 .
3 domino shave
We first find a normalized version of domino shave and then show that this is equiva-
lent to clockwise hackenbush by giving a bijection between the positions. We then
note which selection of dominoes gives rise to games already in the literature. As well,
we show that Hetyei’s Bernoulli game is a subset of stirling shave.
Example 3.1. Let G = (2, 4)(7, 3)(1, 2)(4, 4)(3, 2). The steps of the algorithm are shown
in Table 5.1, where a change of color is indicated by [a, b].
(1, 1) (2, 4)(7, 3)(1, 2)(4, 4)(3, 2) (1, 2)(3, 2) (2, 4)(7, 3)[1, 2](4, 4)[2, 1]
(2, 3) (2, 4)(7, 3)[1, 2](4, 4)[2, 1] (4, 4) (2, 4)(7, 3)[1, 2][3, 3][2, 1]
(3, 5) (2, 4)(7, 3)[1, 2][3, 3][2, 1] (2, 4)(7, 3) [5, 6][6, 5][1, 2][3, 3][2, 1]
(4, 7) [5, 6][6, 5][1, 2][3, 3][2, 1] (5, 6)(6, 5)(1, 2)(3, 3)(2, 1)
Lemma 3.2. Let f be the largest index in Ea , a > 1. The domino df +1 prevents every
domino in Ea from being played.
Proof. Let g be the smallest index of the dominoes in Ea . This gives Ea = {g, g + 1, . . . , f }. After df +1 has been played, every domino di , g ⩽ i ⩽ f , is playable. Thus
Since g ∈ Ea , there exists dj , j > g, j ∈ Ea−1 , that prevents dg from being played. (If no such domino exists, then g ∈ Ea−1 .) We may assume that j is the least such index. Thus min{lg , rg } > min{lj , rj }. Since f + 1, j ∈ Ea−1 and f + 1 ⩽ j, dj does not prevent df +1 from being played. This gives min{lj , rj } > min{lf +1 , rf +1 }. Combining the inequalities yields min{li , ri } > min{lf +1 , rf +1 } for g ⩽ i ⩽ f , that is, df +1 prevents all of Ea from being played.
Lemma 3.4. If D is a domino shave position and D′ is its normalized version, then
D = D′ .
Theorem 3.5. There is a bijection f between domino shave and clockwise hacken-
bush positions such that D − f (D) = 0.
Claim. D − T = 0.
Proof of Claim. This follows in a fashion similar to the previous equivalence result. The mimic strategy is to play the corresponding object of the same index in the other component.
If a = 1, then all edges and dominoes are playable and remain playable until elim-
inated.
If i ∈ ̸ Ea , then neither the dominoes in D′′ nor the edges of T ′′ prevent di and ei from being played.
Suppose i ∈ Ea . If di is playable, then dj+1 has been eliminated. Therefore, in T, ej+1 has also been eliminated. The string T ′′ is now part of the trunk, and every edge, including ei , is playable. Conversely, if ei is playable, then it is on the trunk, and ej+1 has been eliminated. Therefore dj+1 has been eliminated, and every domino in D′′ , including di , is playable.
This proves the claim and the equivalence.
From a clockwise hackenbush position it is possible to get the normalized
domino shave position by realizing that the first trunk corresponds to the domi-
noes in E1 and the next strings to the left, in order, correspond to the dominoes of
E2 , E3 , . . . , En . The normalization algorithm then gives a set of dominoes.
Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, A. K. Peters, 2007.
[2] E. R. Berlekamp, The Hackenbush number system for compression of numerical data, Inform.
and Control 26 (1974), 134–140.
[3] E. R. Berlekamp, Yellow-Brown Hackenbush, in Games of No Chance 3, pp. 413–418, Cambridge
Univ. Press, 2009.
[4] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Academic Press, London, 1982.
[5] C. L. Bouton, Nim, a game with a complete mathematical theory, Annals of Mathematics 3
(1902), 35–39.
[6] J. H. Conway, On Numbers and Games, Academic Press, 1976.
[7] A. Fink, R. J. Nowakowski, A. N. Siegel, and D. Wolfe, Toppling Conjectures, in Games of No
Chance 4, pp. 65–76, Cambridge University Press, 2015.
[8] M. Fisher, R. J. Nowakowski, and C. Santos, Sterling Stirling play, Internat. J. Game Theory 47(2) (2018), 557–576.
[9] G. Hetyei, Enumeration by kernel positions, Adv. in Appl. Math. 42 (2009), 445–470.
[10] G. Hetyei, Enumeration by kernel positions for strongly Bernoulli type truncation games on
words, J. Combin. Theory Ser. A 117 (2010), 1107–1126.
[11] N. A. McKay, Forms and Values of Number-like and Nimber-like Games, PhD Thesis, Dalhousie
University, 2016.
[12] N. A. McKay, R. Milley, and R. J. Nowakowski, Misère-play Hackenbush Sprigs, Internat. J. Game
Theory 45 (2016), 731–742.
[13] A. N. Siegel, Combinatorial Game Theory, American Math. Soc., Providence, 2013.
[14] T. van Roode, Partizan Forms of Hackenbush Combinatorial Games, M. Sc. Thesis, University of
Calgary, 2002.
Alexander Clow and Stephen Finbow
Advances in finding ideal play on poset games
Abstract: Poset games are a class of combinatorial games that remain unsolved. Soltys and Wilson proved that computing winning strategies is in PSPACE, and aside from particular cases such as nim and N-free games, polynomial-time algorithms for finding ideal play are unknown. In this paper, we present methods to calculate the nimber of poset games, allowing for the classification of winning and losing positions. The results present an equivalence of ideal strategies on posets that are seemingly unrelated.
1 Introduction
Poset games are impartial combinatorial games whose game boards are partially or-
dered sets (posets) P on which players take turns removing an element p ∈ P and every
element p′ ≥P p from P. We define P − p≤ = P \ {p′ : p ≤ p′ }. Each turn a player must
remove an element if they can. An example of some moves in a poset game is given in
Figure 6.1.
In this paper, we consider normal play games, where the first player having no
move loses. Poset games include Nim, Chomp, Subset Take-Away, Divisors, and
Acknowledgement: Research of S. Finbow was funded by Natural Sciences and Engineering Research
Council of Canada grant numbers 2014-06571.
The authors would like to thank Dr. Darien DeWolfe and Dr. Richard Nowakowski for sharing their in-
sights and thoughts on various aspects of the paper.
Alexander Clow, Stephen Finbow, Department of Mathematics and Statistics, St. Francis Xavier
University, Antigonish, Nova Scotia, Canada, e-mails: x2018ytd@stfx.ca, sfinbow@stfx.ca
https://doi.org/10.1515/9783110755411-006
Green Hackenbush on trees. As an example, Figure 6.2 shows a game of chomp and
the equivalent poset game.
Nim, central to the theory of all impartial games, is a poset game played on any number
of disjoint totally ordered sets of finite or transfinite cardinality called piles (see Fig-
ure 6.3) and as a result serves as one of the simplest poset games. It is well known that
a game of Nim is in 𝒫 (the set of previous player win games, also called 𝒫 -positions)
if and only if the binary XOR sum of the pile heights is 0 [3].
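The XOR criterion can be stated as a one-line test:

```python
from functools import reduce
from operator import xor

def nim_is_p_position(piles):
    """A Nim position is a P-position iff the XOR of its pile heights is 0 [3]."""
    return reduce(xor, piles, 0) == 0
```

For example, piles of heights (1, 2, 3) form a 𝒫-position, while (1, 2, 4) do not.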
Aside from Nim, very little is known about ideal play on general poset games. Soltys and Wilson [7] proved that computing a winning strategy is in PSPACE, and aside from particular cases such as Nim and N-free games [4], which are not the focus of this paper, polynomial-time algorithms for finding ideal play are unknown. Byrnes [2] also proved nonconstructively that local periodicity exists in Chomp and poset games that resemble Chomp. Attempts at constructive results have thus far been largely unsuccessful [8]. Computational efforts, like those of Zeilberger [9], demonstrate that this local periodicity leads to no discernible global pattern even in cases as small as 3-by-n Chomp.
Partisan games are games where players may have different sets of moves. We
can describe a game G as the set of its left and right options (or moves). This is often
denoted G = {Gℒ | Gℛ }, where Gℒ and Gℛ are the sets of left and right options, re-
spectively. Impartial games are exactly those games where the left and right options
are the same for G and all followers of G. Two impartial games G and H are equal,
G = H, if G + H is a 𝒫 -position. To describe this another way: the compound game in which, on each turn, a player chooses to move in G or in H is a 𝒫 -position. Sprague and Grundy independently proved the following result, which is vital to the study of impartial games in normal play.
The Sprague–Grundy Theorem ([1]). For all impartial combinatorial games G, there ex-
ists a Nim pile that is equivalent to G.
The game with a Nim pile of size n is denoted as ∗n, and G = ∗n denotes that
the game G is equivalent to a Nim pile of size n. The 𝒢 -value (Grundy value or Grundy
number) of an impartial game G is exactly the n such that G = ∗n. Equivalently, we
write 𝒢 (G) = n. We define the option value set of an impartial game G, which we denote
G∗ , to be the set of 𝒢 -values of all the options of G. An important tool in determining
𝒢 -values is the mex-rule (minimal excluded value rule). Formally, the mex of a set of
nonnegative integers is the smallest nonnegative integer not in the set. For example,
mex{0, 2} = 1. From Sprague and Grundy’s theory [1], 𝒢 (G) = mex(G∗ ) for all impartial
games G. We will write G ≡ H if G∗ = H ∗ .
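The mex rule turns directly into a search procedure for 𝒢 -values of poset games. In the sketch below (our own encoding, not from the paper), a position is the frozenset of surviving elements, and above[p] is the set of elements q ≥ p, so a move at p removes above[p]:

```python
from functools import lru_cache

def poset_grundy(above, elements):
    """Grundy value of the poset game on `elements`, where above[p] is the
    set of elements q >= p (a move at p removes all of above[p])."""
    up = {p: frozenset(s) for p, s in above.items()}

    @lru_cache(maxsize=None)
    def g(pos):
        option_values = {g(pos - up[p]) for p in pos}   # the option value set G*
        n = 0
        while n in option_values:                       # mex: least excluded value
            n += 1
        return n

    return g(frozenset(elements))
```

On a three-element chain this returns 3 (a Nim pile of size 3), and on disjoint chains of sizes 1 and 2 it returns 1 XOR 2 = 3, as the Sprague–Grundy theory predicts.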
For impartial games, the canonical form of a value ∗n is exactly the Nim pile of
size n (or any other game with the same game tree). We will call G weakly canonical
if G ≡ ∗n, which is equivalent to |G∗ | = 𝒢 (G). An example of a game that is weakly
canonical is G = {{∗ | ∗}, ∗2 + ∗3, ∗ || {∗ | ∗}, ∗2 + ∗3, ∗} ≡ ∗2. An example of a game that
is not weakly canonical is H = {0, ∗2 | 0, ∗2}.
A fence F is a poset F = {f0 , f1 , f2 , . . . , fn } such that f0 > f1 , f1 < f2 , f2 > f3 , f3 <
f4 , . . . , fn−1 < fn or f0 < f1 , f1 > f2 , f2 < f3 , f3 > f4 , . . . , fn−1 > fn if n is even and f0 > f1 ,
f1 < f2 , f2 > f3 , f3 < f4 , . . . , fn−1 > fn or f0 < f1 , f1 > f2 , f2 < f3 , f3 > f4 , . . . , fn−1 < fn if n is
odd, and such that these are the only comparabilities between the points. The points f0 and
fn are the endpoints of the fence. A poset P is connected if and only if for all p1 , p2 ∈ P,
there is a fence F ⊂ P with endpoints p1 , p2 . A poset that is not connected is called
disconnected [6].
In this paper, we provide insights into how to play on connected poset games. We
do so by factoring/partitioning a connected poset A into subposets that give mean-
ingful information about 𝒢 (A) and the subgames of A. The work is related to playing
games on the ordinal sum of posets (which is exactly the ordinal sum of poset games)
but generalizes this idea. Let A and B be posets with disjoint underlying sets. Then the
ordinal sum A : B is the poset on A ∪ B with x ≤ y if either x, y ∈ A and x ≤A y, or
x, y ∈ B and x ≤B y, or x ∈ A and y ∈ B. In other words, any move in A eliminates all
possible moves in B for the remainder of the game. This concept has been generalized
to impartial games. The following result is due to Fisher, Nowakowski and Santos [5],
where mex(S, 0) = mex(S), and for k ≥ 1, mex(S, k) = mex(S′ ) when
Clearly, each order compressing function is a homomorphism (see Figure 6.4); moreover, every order compressing function is also order reflecting. The converse of this statement is false: Figure 6.5 gives an example of a homomorphism that is not order compressing.
Figure 6.5: A homomorphism h that is not order compressing; h identifies x with y and z with w (h(x) = h(y), h(z) = h(w)).
3 Equivalencies of games
In this section, we establish two results allowing for the reduction of poset games to
simpler poset games using order compressions. The first (Theorem 3) is a generaliza-
tion of the colon principle originally given in [3]. The second (Theorem 4) is a result
that deals with the interchangeability of particular classes of subposets that are option
equivalent.
Proof. Assume that f −1 (α) = g −1 (α) and consider A + B. It is sufficient to show that o(A + B) = 𝒫 . Without loss of generality, for all moves on A not on f −1 (α), the second player will mirror the move in B. If a player moves on f −1 (α) or g −1 (α), then respond as if you were playing f −1 (α) + g −1 (α). By the maximality of x, any such move will not affect any y ∈ f −1 (β) or z ∈ g −1 (β). By induction, all of the above countermoves are winning moves. Thus o(A + B) = 𝒫 .
Proof. By Theorem 2 we only need to show that if A = B, then f −1 (α) = g −1 (α). Assume
that A = B and consider f −1 (α) + g −1 (α). It suffices to show that o(f −1 (α) + g −1 (α)) = 𝒫 .
Assume for contradiction that o(f −1 (α) + g −1 (α)) = 𝒩 . Then there exists an element z
on f −1 (α) or g −1 (α) such that, without loss of generality, o((f −1 (α) − z≤ ) + g −1 (α)) = 𝒫 .
By Theorem 2 this implies that playing z on A + B is a winning move, which contra-
dicts our assumption that A = B. Hence no such z exists, and, as a result, o(f −1 (α) +
g −1 (α)) = 𝒫 .
gives an example of a nontrivial poset. The following corollary implies that this poset
is a previous player win position.
Proof. Let α1 , α2 , . . . , αd be the elements of Q such that f −1 (αi ) ≇ g −1 (αi ). Let Ai be the poset such that hi : Ai → Q is an order compression and, for all j ≤ i, hi−1 (αj ) ≅ g −1 (αj ), whereas for all k > i, hi−1 (αk ) ≅ f −1 (αk ). Then we say that A ≅ A0 and B ≅ Ad .
Consider Ai and Ai+1 . If |Q| = 1, then the statement is trivial. Otherwise, without loss of generality, for all moves on hi−1 (αi+1 ) in A, there exists a move on hi+1−1 (αi+1 ) in B such that the two resulting games are equal, by our assumption that f −1 (β) ≡ g −1 (β) and Theorem 3. If either player moves anywhere else on Ai or Ai+1 , then the resulting games are equal by induction, as Ai ≡ Ai+1 implies Ai = Ai+1 . Thus Ai ≡ Ai+1 . Hence, by the transitivity of ≡, A0 ≡ Ad .
4 Applications
In this section, we provide two examples of applications of the results of the previous
section. Consider the posets drawn in Figure 6.9. We claim that these all have the same
nimber.
Figure 6.9: Three posets with nimber 3, given by a particular case of Theorem 3 (Colon Principle) and
Theorem 4.
To establish the first equivalence shown in Figure 6.9, consider the subposets consist-
ing of red elements. The option value set of these subposets is {0, 2} and hence the
equivalence follows from Theorem 4. The second equivalence in Figure 6.9 follows
from applying Theorem 3 as we claim the subposets consisting of blue elements have
nimber 1. To demonstrate this, the subposet of blue elements is redrawn and recolored
in Figure 6.10. Observe that the subset of green elements in Figure 6.10 has nimber 0
and contains a maximal element of the poset. Theorem 3 now implies the first equiv-
alence in Figure 6.10. The second equivalence in Figure 6.10 can be found by applying
Theorem 1.
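Claims of this kind can be checked mechanically. Below is a minimal brute-force sketch (our own code, not from the paper) of computing the nimber (Grundy value) of a poset game, assuming the standard convention that a move at an element x removes every y ≥ x; the function names are ours.

```python
from functools import lru_cache

def grundy(elements, leq):
    """Nimber of the poset game on `elements` with order relation `leq`,
    where a move at x removes every y with x <= y."""
    @lru_cache(maxsize=None)
    def g(pos):
        options = set()
        for x in pos:
            # Remove the up-set of x; the remainder is the option.
            options.add(g(frozenset(y for y in pos if not leq(x, y))))
        m = 0  # mex: least nonnegative integer not among the options
        while m in options:
            m += 1
        return m
    return g(frozenset(elements))

# Sanity checks: a chain of n elements plays like a Nim heap of size n,
# and an antichain of n elements has nimber n mod 2.
print(grundy(range(3), lambda a, b: a <= b))  # 3
```

The same routine can, in principle, confirm nimbers such as those claimed for the posets of Figure 6.9, once those posets are encoded as a `leq` relation.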
106 | A. Clow and S. Finbow
Figure 6.10: Equality of the subposet of blue elements from Figure 6.9 by Theorem 1 and Theorem 3.
Theorem 4 points to the importance of determining the set of posets with a given option value set. A natural avenue of investigation is, given a set of nonnegative integers S, to enumerate the games that have option value set S.
Theorem 5. For all S ⊂ ℕ0 such that S ≠ ∅, {P : P ∗ = S} ≠ ∅ if and only if |{P : P ∗ = S}| is infinite.
5 Conclusion
In this paper, we establish equivalencies in ideal play between poset games that have obvious and not-so-obvious similarities. In particular, this work contributes to the study of ideal play on connected poset games. Whether the converse of Theorem 4 is true remains an open question. We end the paper by asking a related question.
Let the lexicographic product of two posets A and B, denoted here A ⊗ B, be the
poset given by the Cartesian product A×B ordered by the following rule: (a, b) ≤ (a′ , b′ )
if and only if a ≤A a′ and, if a = a′ , then b ≤B b′ .
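This order can be expressed directly; the sketch below is our own illustration (names ours), and it assumes the component order `leqA` is reflexive:

```python
def lex_leq(p, q, leqA, leqB):
    """(a, b) <= (a', b') in A ⊗ B iff a <=_A a' and, if a = a', b <=_B b'."""
    (a1, b1), (a2, b2) = p, q
    return leqB(b1, b2) if a1 == a2 else leqA(a1, a2)

# For two 2-element chains, A ⊗ B is the 4-element chain
# (0,0) < (0,1) < (1,0) < (1,1), i.e., ordered by rank 2a + b.
le = lambda x, y: x <= y
pairs = [(a, b) for a in range(2) for b in range(2)]
```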
If Conjecture 1 is true, then it would imply that for the lexicographic product A ⊗ B of any two posets that satisfy its assumptions, 𝒢 (A ⊗ B) = 𝒢 (A)𝒢 (B). Moreover,
this implies that if the left factor A has nimber 0, then 𝒢 (A ⊗ B) = 0. Note that if B = 0
and B is weakly canonical, then A ⊗ B = 0. Verifying Conjecture 1 would point to the
possibility of the existence of other equations like that of the ordinal sum in Theorem 1.
Bibliography
[1] M. H. Albert, R. J. Nowakowski, D. Wolfe, Lessons in Play: An Introduction to Combinatorial Game
Theory, CRC Press, Boca Raton, 2019.
[2] S. Byrnes, Poset-game periodicity, Integers 3 (2003), Article G3.
[3] J. H. Conway, R. K. Guy, E. R. Berlekamp, Winning Ways for Your Mathematical Plays, Volume 1,
AK Peters, Natick, 1983.
[4] S. A. Fenner, J. Rogers, Combinatorial game complexity: an introduction with poset games,
arXiv:1505.07416 (2015).
[5] M. Fisher, R. J. Nowakowski, C. Santos, Sterling Stirling play, Internat. J. Game Theory 47(2)
(2018), 557–576.
[6] B. S. W. Schröder, Ordered Sets, Birkhäuser, Boston, 2003.
[7] M. Soltys, C. Wilson, On the complexity of computing winning strategies for finite poset games,
Theory Comput. Syst. 48(3) (2011), 680–692.
[8] D. Zeilberger, Chomp, recurrences and chaos, J. Difference Equ. Appl. 10(13–15) (2004),
1281–1293.
[9] D. Zeilberger, Three-rowed chomp, Adv. Appl. Math. 26(2) (2001), 168–179.
Erik D. Demaine and Yevhenii Diomidov
Strings-and-Coins and Nimstring are
PSPACE-complete
In memoriam Elwyn Berlekamp (1940–2019),
John H. Conway (1937–2020),
and Richard K. Guy (1916–2020)
1 Introduction
Elwyn Berlekamp loved Dots and Boxes. He wrote an entire book The Dots and Boxes
Game: Sophisticated Child’s Play [3] devoted to explaining the mathematical under-
pinnings of the game, after they were first revealed in Berlekamp, Conway, and Guy’s
classic book Winning Ways exploring many such combinatorial games [2, Ch. 16]. At
book signings for both books1 and after talks he gave about these topics [1], Elwyn
routinely played simultaneous exhibitions of Dots and Boxes—him against dozens of
players, in the style of chess masters.
As many children will tell you, Dots-and-Boxes is a simple pencil-and-paper game
taking place on an m × n grid of dots. Two players alternate drawing edges of the grid
with one special rule: when a player completes the fourth edge of one or two 1×1 boxes,
that player gains one or two points, respectively, and must immediately draw another
edge (a “free move”, which is often a blessing and a curse). The game ends when all
grid edges have been drawn; then the player with the most points wins. (Draws are
possible on boards with an even number of squares.)
1 The first author had the honor of playing such a game against Elwyn at a book signing on April 13,
2004, at Quantum Books in Cambridge, Massachusetts. Elwyn won.
Acknowledgement: This work was initiated during an MIT class on Algorithmic Lower Bounds: Fun
with Hardness Proofs (6.892, Spring 2019). We thank the other participants of the class for providing
an inspiring research environment.
Erik D. Demaine, Yevhenii Diomidov, MIT Computer Science and Artificial Intelligence Laboratory,
Cambridge, Massachusetts, USA, e-mails: edemaine@mit.edu, diomidov@mit.edu
https://doi.org/10.1515/9783110755411-007
An equivalent way to think about Dots-and-Boxes is in the dual of the grid graph.
Think of each 1 × 1 square as a dual vertex or coin worth one point, “tied down” by four
incident strings or dual edges. Interior strings connect two coins, whereas boundary
strings connect a coin to the ground (not worth any points). (Equivalently, boundary
edges have only one endpoint.) Now players alternate cutting (removing) strings, and
when a player frees one or two coins (removing the last strings attached to them), that
player gains the corresponding number of points and must move again. The game
ends when all strings have been cut; then the player with the most points wins.
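The string-cutting rules above can be made concrete with a tiny brute-force solver. This is our own illustrative sketch, not from the paper: `best` returns the best achievable score differential for the player to move on a small multigraph, with the string `"GND"` standing for the ground.

```python
from functools import lru_cache

GROUND = "GND"

@lru_cache(maxsize=None)
def best(edges):
    """Best score differential (my coins minus opponent's) for the player
    to move, where `edges` is a tuple of (endpoint, endpoint) strings."""
    if not edges:
        return 0
    results = []
    for i, e in enumerate(edges):
        rest = edges[:i] + edges[i + 1:]
        # Coins freed by this cut: real endpoints with no remaining string.
        freed = sum(1 for v in set(e)
                    if v != GROUND and all(v not in f for f in rest))
        if freed:  # free move: the same player cuts again
            results.append(freed + best(rest))
        else:      # the turn passes to the opponent
            results.append(-best(rest))
    return max(results)

# One coin tied down by two ground strings: the second player takes it.
print(best((("A", GROUND), ("A", GROUND))))  # -1
```

The first player wins a position exactly when `best` is positive; switching the scoring to normal play would give the Nimstring variant discussed next.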
Strings-and-Coins [2, pp. 550–551], [3, Ch. 2] is a generalization of this game to ar-
bitrary graphs, where vertices represent coins, and edges represent strings, which can
connect up to two coins (the other endpoints being considered “ground”). Nimstring
[2, pp. 552–554], [3, Ch. 6] is the closely related game, where we modify the win condi-
tion to normal play: the first player unable to move loses. Nimstring is known to be a
particular case of Strings-and-Coins, a fact we use in our results; see Lemma 1.
Related work
Dots-and-Boxes, Strings-and-Coins, and Nimstring are surprisingly intricate games
with intricate strategy [2, 3]. On the mathematical side, even 1 × n Dots-and-Boxes
is largely unsolved [8, 5].
To formalize this difficulty, Winning Ways [2] argued in 1984 that deciding the win-
ner of a Strings-and-Coins position is NP-hard by a reduction from vertex-disjoint cycle
packing. Around 2000, Eppstein [7] pointed out that this reduction can be adapted to
apply to Dots-and-Boxes as well; see [6].
This work left some natural open problems, first explicitly posed in 2001 [6]: are
Dots-and-Boxes, Strings-and-Coins, and Nimstring NP-complete or do they extend
into a harder complexity class? Being bounded two-player games, all three naturally
lie within PSPACE; are they PSPACE-complete?
Results
In this paper, we settle two out of three of these 20-year-old open problems by proving
that Strings-and-Coins and Nimstring are PSPACE-complete. This is the first improve-
ment beyond NP-hardness since the original Winning Ways result from 1984. Our re-
ductions from Game SAT are relatively simple but subtle. Along the way, we prove
the PSPACE-completeness of a new Strings-and-Coins variant called Coins-Are-Lava,
where the first person to free a coin loses.
Our constructed game positions rely on multigraphs with multiple copies of some
edges/strings, a feature not present in instances corresponding to Dots-and-Boxes.
Thus our results do not apply to Dots-and-Boxes. A generalization of Dots-and-Boxes
that we might be able to target is weighted Dots-and-Boxes, where each grid edge has a
specified number of times it must be drawn before it is “complete” and thus can form
the boundary of a 1 × 1 box. This game corresponds to Strings-and-Coins on planar
multigraphs whose vertices can be embedded at grid vertices such that edges have
unit length. However, our multigraphs are neither planar nor maximum-degree-4, so
they cannot be drawn on a square grid, and so our approach does not resolve the com-
plexity of weighted Dots-and-Boxes.
In independent work, Buchin, Hagedoorn, Kostitsyna, and van Mulken [4]
proved that (unweighted) Dots-and-Boxes is PSPACE-complete by a reduction from
Gpos (POS CNF) [9] (roughly the same problem that we reduce from, Gpos (POS DNF) [9]).
They construct an instance where, after variable setting, one player’s winning strategy
is to select a maximum set of disjoint cycles. This approach works well for Dots-and-Boxes (and thus Strings-and-Coins), where the goal is to maximize score, but not for Nimstring, which our approach does handle. Thus the two approaches are incomparable.
2 Nimstring
We begin with more formal definitions of the games of interest and some known lem-
mas about them.
Next, we prove the standard result that Nimstring is equivalent to a particular case
of Strings-and-Coins, and thus hardness of the former implies hardness of the latter.
Lemma 1 ([2, p. 552]). For every graph G, there exists an efficiently computable graph
H such that the winner of Nimstring(G) is the same as the winner of Strings-And-
Coins(H).
Lemma 2 ([2, p. 557]). If G has a degree-2 vertex adjacent to exactly one degree-1 vertex,
then the first player can always win in Nimstring(G). Such positions are known as loony
positions.
Figure 7.1: (a) String b connects to a vertex of degree at least 2. (b) String b connects to the ground.
Proof. Let a be the string between the two coins, let b be the other string connected to
a degree-2 coin, and let G′ be the rest of the graph (Figure 7.1). One of the players has
a winning strategy in Nimstring(G′ ).
– If the first player has a winning strategy in Nimstring(G′ ), then we cut strings a
and b in this order. We get exactly G′ , and it is still our turn. By assumption we
can win.
– If the second player has a winning strategy in Nimstring(G′ ), then we just cut string b. We get graph G′ (plus an extra edge that does not affect the game), and it is our opponent’s turn. By assumption the opponent cannot win.
3 Coins-are-Lava
We introduce a variant game played on strings and coins that we find easier to analyze,
called Coins-are-Lava.2
2 For a “practical” motivation for this game, consider the 1933 Double Eagle U. S. coin: until 2002,
possession of this coin could result in imprisonment [10].
The moves are the same as in Strings-and-Coins, but if a player frees a coin, then that player loses. Equivalently, players are forbidden from removing an edge that would free a coin, and the winner is determined according to normal play.
Now we show that Coins-are-Lava is a particular case of Nimstring. Thus its hard-
ness will imply the hardness of both Nimstring and (by Lemma 1) Strings-and-Coins.
Lemma 3. For every graph G, there exists an efficiently computable graph H such that
the winner of Coins-Are-Lava(G) is the same as the winner of Nimstring(H).
Figure 7.2: (a) The graph H; (b) freeing a coin in G results in a loony position; (c–f) cutting a string
outside G results in a loony position.
Proof. Let H be a graph obtained from G by connecting every coin to the ground with
a long chain (length ≥ 5); see Figure 7.2a.
If a player cuts a string in one of these chains or cuts all strings in G attached to
the same coin, then this creates a loony position and ends their turn; see Figure 7.2. By
Lemma 2 their opponent can then win.
Therefore the players will try to avoid cutting strings outside G or freeing a coin
in G. The first player to fail to do so loses. This goal is equivalent to Coins-Are-Lava(G).
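The construction in this proof is easy to write down explicitly. A sketch with our own naming (`"GND"` for the ground; chain vertices get fresh generated names):

```python
GROUND = "GND"

def lava_to_nimstring(edges, coins, chain_len=5):
    """Build H from G (as in Lemma 3): attach a chain of `chain_len`
    strings from every coin of G to the ground."""
    new_edges = list(edges)
    for c in coins:
        # c - c#0 - c#1 - ... - GND, using chain_len strings in total.
        path = [c] + [f"{c}#{i}" for i in range(chain_len - 1)] + [GROUND]
        new_edges.extend(zip(path, path[1:]))
    return new_edges

H = lava_to_nimstring([("A", "B")], ["A", "B"])
# |E(H)| = 1 original string + 5 chain strings per coin = 11
```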
4 PSPACE-hardness
It remains to prove that Coins-Are-Lava(G) is PSPACE-complete. Our reduction is from
the following known PSPACE-complete problem.
Definition 4 (Game-SAT(ℱ )). Given a positive DNF formula ℱ (an OR of ANDs of variables without negation), Game-SAT(ℱ ) is the following game played by two players,
Trudy and Fallon. Initially, each variable is unset. In each turn the player may set a
variable to true or false, or the player may skip their turn (do nothing). The game
ends when all variables are set; then Trudy wins if formula ℱ is true, whereas Fallon
wins if formula ℱ is false.
We allow players to skip turns and to set variables to the “wrong” value (Trudy
to false or Fallon to true). The player with a winning strategy can always avoid such
moves, however, replacing them with dominating “good” moves that do not skip and
play the “right” value (Trudy to true or Fallon to false), as such moves never hurt the
winning player’s final goal.
Schaefer [9] proved that this game is PSPACE-complete under the name
Gpos (POS DNF).
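For intuition, Game-SAT on tiny formulas can be solved by brute force. The sketch below is ours, not Schaefer's: it assumes Trudy moves first and omits skip moves, which, as noted above, are dominated. A clause is a tuple of variable indices.

```python
from functools import lru_cache

def trudy_wins(clauses, n):
    """True iff Trudy wins Game-SAT on a positive DNF with n variables.
    Assignment entries: 0 = unset, 1 = true, -1 = false."""
    clauses = tuple(tuple(c) for c in clauses)

    @lru_cache(maxsize=None)
    def win(assign, trudy_to_move):
        if all(v != 0 for v in assign):  # game over: evaluate the DNF
            return any(all(assign[i] == 1 for i in c) for c in clauses)
        nexts = [win(assign[:i] + (val,) + assign[i + 1:], not trudy_to_move)
                 for i, v in enumerate(assign) if v == 0
                 for val in (1, -1)]
        return any(nexts) if trudy_to_move else all(nexts)

    return win((0,) * n, True)

print(trudy_wins([(0, 1)], 2))  # False: Fallon falsifies the other variable
```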
Proof. Let ℱ be a positive DNF formula with n variables, m clauses, and ki occurrences
of each variable xi . Without loss of generality, every clause contains at least two vari-
ables, and every variable appears in at least one clause. Fix a sufficiently large number
N ≫ m^2 n^2 .
First, we define several useful gadgets, which will be connected together via
shared coins (merging the output coin of one gadget with the input coin of another
gadget). Many of these gadgets are parameterized by an integer level. Intuitively, do-
ing anything to a level-(ℓ + 1) gadget requires an order of magnitude more time than
doing anything to a level-ℓ gadget. This way we can make sure that players interact
with gadgets in the right order. However, since each level-ℓ gadget uses N^Θ(ℓ) strings, we can only use a constant number of levels.
(Figure 7.3: a rope of width 5, drawn as five strings sharing both endpoints.)
A rope (Figure 7.3) is a collection of strings that share both endpoints. The number
of strings in a rope is called its width. We say that a rope has been cut when all of its
strings have been cut. When the game ends, every rope has either been cut completely,
or it has only one string remaining. (Otherwise, a string in the rope can always be safely cut without freeing any coin.)
Figure 7.4: (a) Initial state (unset). (b) Variable set to true and false, respectively.
A variable gadget (Figure 7.4) consists of a chain of two strings, where the bottom
string is connected to the ground, and the top string is connected to an output coin.
We say that it is set to false if the bottom string has been cut, set to true if the top string
has been cut, and unset if neither string has been cut. A variable implicitly has level 0.
A level-ℓ wire gadget (Figure 7.5) consists of a chain of two ropes, a width-N^{2ℓ−1} bottom rope connected to an input coin and a width-N^{2ℓ} top rope connected to an output coin. We say that it is disabled if the input rope has been cut and activated if the
top rope has been cut. The HP (Hit Points) of the wire is the number of strings remain-
ing in the bottom rope. Note that activating a wire takes a factor of N more moves than
disabling it. This means that if one player is racing to activate a wire and the other is
racing to disable it, then the disabler will win the race. Intuitively, the only case where
a wire will get activated is if disabling the wire would free a coin.
A level-ℓ clause gadget (Figure 7.6) consists of a single width-N^{2ℓ−1} rope connected
to an input coin and the ground. We say that it is disabled if the rope has been cut. The
HP of the clause is the number of strings remaining in the rope.
The winner is determined solely by the parity of the number of removed strings.
We can easily flip this parity, for example, by adding an extra ground-to-ground string.
So without loss of generality, Fallon wins if (but not only if) every variable and wire
has one string remaining and all m clauses have no strings. Then Trudy wins if every
variable and wire has one string remaining, m−1 clauses have no strings, and the final
clause has one string. In fact, we will show that the game has to end in one of these
two specific ways.
Let ℱ ′ be a new formula with the following clauses:
– all clauses from ℱ , which we call real clauses;
– for every variable, a singleton clause containing just that variable; and
– one additional empty clause that contains no variables and is always satisfied.
First, we describe how typical gameplay in G should look (without proofs) to give some
intuition for why this construction makes sense, and then we prove that it works more
formally. Typical gameplay divides into four sequential phases:
1. First, Trudy and Fallon set variable gadgets to true and false respectively.
2. Then the players disable all wires from false variables and disable all but one wire
from each true variable (disabling all wires from a true variable would free a coin).
Then they activate the level-1 wires that have not been disabled (one from each
true variable). Note that almost half the wires from each variable go to singleton
clauses. If all real clauses are false, then those wires form the majority, and Fallon
can ensure that one of them gets activated. However, if even one real clause is
true, then the wires to true clauses (real or singleton) now form a majority, and
Trudy can ensure that one of them gets activated.
3. Then the players disable all but one level-2 wire and activate the remaining level-2
wire (disabling all of them would free a coin). Almost half of these wires go to the
empty clause. If all real clauses are false, then they form a majority, and Fallon can ensure that one of them gets activated. However, if even one real clause is true, then it together with the empty clause forms a majority, and Trudy can ensure that one of them gets activated.
4. Finally, the players disable the clause gadgets. A clause can be disabled unless all wires pointing at it got activated (in that case, disabling it would free a coin). If the formula is not satisfied, then all clauses get disabled, and Fallon wins. If the formula is satisfied, then exactly one clause remains, and Trudy wins.
Figure 7.7: Graph G for the formula (x1 ∧ x2 ∧ x3 ) ∨ (x2 ∧ x3 ) ∨ (x3 ∧ x4 ). Clauses are labeled and colored according to whether they are empty (“empty” and gray, at the top), singleton (“xi ” and green), or real (“xi ∧ xj ⋅ ⋅ ⋅” and orange). Dotted lines indicate that there are supposed to be ki − 1 wires there, but ki − 1 = 0.
We want to show that the winner of Coins-Are-Lava(G) is the same as the winner
of Game-SAT(ℱ ). We do a case split on the winner of Game-SAT(ℱ ) and in each case
provide a winning Coins-Are-Lava(G) strategy for that player.
If Fallon can win Game-SAT(ℱ ), then they can win Coins-Are-Lava(G) using the
following strategy (where numbers match the phases of intended gameplay above):
1. There is a natural mapping f from states of Coins-Are-Lava(G) to states of
Game-SAT(ℱ ): a variable xi in Game-SAT(ℱ ) is set to true if the corresponding
variable gadget is set to true, set to false if the gadget is set to false, and unset if
the gadget is unset. Every move in Coins-Are-Lava(G) maps to a valid move in
Game-SAT(ℱ ), where moves outside of variable gadgets map to skip moves. Also,
if we played Coins-Are-Lava(G) for less than 2n moves, then we can perform any
move that is valid in the corresponding Game-SAT(ℱ ) state. This does not free a
coin, because the relevant coin has degree Ω(N) ≫ 2n. So we can transfer the
strategy from Game-SAT(ℱ ) to Coins-Are-Lava(G): for every opponent’s move in
Coins-Are-Lava(G), map it to Game-SAT(ℱ ), find the best response, and map it
back to Coins-Are-Lava(G). We remain in this phase until we have set all variable
gadgets to some assignment that does not satisfy ℱ , as guaranteed by the winning
strategy in Game-SAT(ℱ ).
2. Call a level-1 wire from a true variable xi good if it points at a real clause and bad
if it points at a singleton clause. Wires from false variables are neutral. Each true
variable xi has ki good wires and ki − 1 bad ones. For each true variable xi , the total
HP of bad wires is still at most (ki − 1)N^1 , and the total HP of all good wires is at least ki N^1 − O(n) > (ki − 1)N^1 (the opponent could cut up to O(n) strings here while we were setting variables).
(a) Disable all bad wires. Specifically, if the opponent reduced the HP of a good wire connected to some true variable xi , then we respond by reducing the HP of a bad wire connected to the same xi ; if the opponent did something else or xi has no bad wires left, then we reduce the HP of a bad wire connected to an arbitrary variable xj . This maintains the invariant that for each true variable xi , the HP of good wires of xi is higher than the HP of bad wires of xi . The opponent cannot activate any bad wires because that would take Θ(N^2 ) ≫ ∑i ki N^1 moves.
(b) Disable good and neutral level-1 wires until there is only one good wire re-
maining per true variable. Once again, the opponent cannot activate these
wires because that would take too many moves.
(c) Activate the remaining good wires. Opponent cannot disable these wires, be-
cause that would free a coin.
(d) We have activated exactly one good wire per true variable. There are no acti-
vated level-1 wires pointing at satisfied clauses, because real clauses are un-
satisfied and singleton clauses are bad.
3. Call a level-2 wire good if it points to a real or singleton clause and bad if it points to the empty clause. The total HP of the n + m − 1 bad wires is at most (n + m − 1)N^3 , and the total HP of the n + m good wires is still (after O(nmN^2 ) moves spent in the first two stages) at least (n + m)N^3 − O(nmN^2 ) > (n + m − 1)N^3 .
(a) Disable all bad wires. The opponent cannot disable all good wires before we
disable the bad ones because good wires have more HP.
(b) Disable all but one good wire. Because all disabling and activating steps done so far are for wires of HP Θ(N^3 ) and activating a level-2 wire requires Θ(N^4 ) moves, the opponent cannot afford to activate any of these good wires before we disable them.
(c) Activate the last good wire. The opponent cannot disable it because that
would free the root coin.
4. Disable all clause gadgets. This will not free a coin, because every clause has
at least one disabled wire: real clauses are unsatisfied, so there is a false vari-
able whose adjacent wire we disabled in Step 2(b); singleton clauses have bad
level-1 wires that we disabled in Step 2(a); and the empty clause has a bad level-2
wire that we disabled in Step 3(a). We win because there are no clause gadgets
remaining.
If Trudy can win Game-SAT(ℱ ), then they can win Coins-Are-Lava(G) using the fol-
lowing strategy (where numbers match the phases of intended gameplay above):
1. Set the variable gadgets to some assignment that satisfies ℱ . Let C ∈ ℱ be a satis-
fied clause.
2. Call a level-1 wire from a true variable xi ∈ C good if it points at a singleton clause
or C and bad otherwise. Wires from variables not in C are neutral. Each variable
xi ∈ C has ki − 1 bad wires (ki to real clauses, but one of them is C) and ki good ones
(ki − 1 to the singleton clause and one to C). Disable all bad wires, then disable
all neutral wires and all-but-one good wire per variable in C, and then activate
the remaining good wires. Each activated wire points to a singleton clause or to C.
Then either all of them point to C, or at least one of them points to a singleton
clause. Either way, we have some satisfied real or singleton clause C ′ with only
activated level-1 wires.
3. Call a level-2 wire good if it points to C ′ or to the empty clause. There are n + m − 1 bad wires (n + m to real or singleton clauses, but one of them is C ′ ) and n + m good ones (n + m − 1 to the empty clause plus one to C ′ ). Disable all bad wires and activate exactly one good wire. Let C ′′ be the clause pointed to by the activated wire (either C ′ or the empty clause).
4. Disable all clause gadgets other than C ′′ . This will not free a coin, because every
clause other than C ′′ has a disabled level-2 wire. However, C ′′ cannot be disabled,
because all of the wires pointing at it have been activated. We win because there
is exactly one clause gadget remaining.
5 Open problems
We have proved the PSPACE-completeness of Strings-and-Coins and Nimstring on
multigraphs, whereas Buchin et al. [4] proved the PSPACE-completeness of Dots-and-
Boxes and thus Strings-and-Coins on grid graphs. The main open problem is whether
Dots-and-Boxes with normal play instead of scoring, i. e., Nimstring on grid graphs,
is also PSPACE-complete. Toward this goal, we could also aim to prove the PSPACE-
completeness of Nimstring on simple graphs (with only one copy of each edge/string)
or planar graphs.
Bibliography
[1] American Mathematical Society, Elwyn Berlekamp Gives Arnold Ross Lecture, http://www.ams.
org/programs/students/arl2004, 2003.
[2] E. Berlekamp, J. Conway, and R. Guy, Winning Ways for Your Mathematical Plays, Volume 3, A K
Peters, Wellesley MA, 2003.
[3] E. Berlekamp, The Dots and Boxes Game: Sophisticated Child’s Play, A K Peters,
Massachusetts, 2000.
[4] K. Buchin, M. Hagedoorn, I. Kostitsyna, and M. van Mulken, Dots & boxes is PSPACE-complete, arXiv:2105.02837, 2021.
[5] S. Collette, E. Demaine, M. Demaine, and S. Langerman, Narrow misère dots-and-boxes, Games
of No Chance 4, Cambridge University Press, 2015.
[6] E. Demaine and R. Hearn, Playing games with algorithms: algorithmic combinatorial game
theory, Games of No Chance 3, Cambridge University Press, 2009.
[7] D. Eppstein, Computational Complexity of Games and Puzzles, http://www.ics.uci.edu/
~eppstein/cgt/hard.html.
[8] R. Guy and R. Nowakowski, Unsolved problems in combinatorial games, in More Games of No
Chance, Cambridge University Press, 2002.
[9] T. Schaefer, On the complexity of some two-person perfect-information games, J. Comput.
System Sci. 16(1978), 185–225.
[10] United States Mint, The United States government to sell the famed 1933 double eagle,
the most valuable gold coin in the world, https://www.usmint.gov/news/press-releases/
20020207-the-united-states-government-to-sell-the-famed-1933-double-eagle-the-most-
valuable-gold-coin-in-the-world, 2002.
Eric Duchêne, Marc Heinrich, Richard Nowakowski, and
Aline Parreau
Partizan subtraction games
Abstract: Partizan subtraction games are combinatorial games where two players, say
Left and Right, alternately remove a number n of tokens from a heap of tokens, with
n ∈ Sℒ (resp., n ∈ Sℛ ) when it is Left’s (resp., Right’s) turn. The first player unable to
move loses. These games were introduced by Fraenkel and Kotzig in 1987, who also introduced the notion of dominance, i. e., an asymptotic behavior of the outcome sequence where Left always wins if the heap is sufficiently large. In the current paper,
we investigate other kinds of behaviors for the outcome sequence. In addition to dom-
inance, three other disjoint behaviors are defined, weak dominance, fairness, and ul-
timate impartiality. We consider the problem of computing this behavior with respect
to Sℒ and Sℛ , which is connected to the well-known Frobenius coin problem. General
results are given, together with arithmetic and geometric characterizations when the
sets Sℒ and Sℛ have size at most 2.
1 Introduction
Partizan subtraction games were introduced by Fraenkel and Kotzig [2] in 1987. They
are two-player combinatorial games played on a heap of tokens. Each player is as-
signed a finite set of integers, denoted Sℒ (for the Left player), and Sℛ (for the Right
player). A move consists in removing a number m of tokens from the heap, provided
that m belongs to the set of the player. The first player unable to move loses. When
Sℒ = Sℛ , the game is impartial and known as the standard subtraction game; see [1].
We now recall the useful notations and definitions coming from combinatorial
game theory. More information can be found in the reference book [7]. There are two
basic outcome functions: for a position g, oL (g) is the winner (L or R) when Left moves first,
Eric Duchêne, LIRIS UMR CNRS, Université Lyon 1, Lyon, France, e-mail: eric.duchene@univ-lyon1.fr
Marc Heinrich, University of Leeds, School of Computing, Leeds, United Kingdom, e-mail:
marc.heinrich@free.fr
Richard Nowakowski, Department of Mathematics and Statistics, Dalhousie University, Halifax, Nova
Scotia, Canada, e-mail: R.Nowakowski@Dal.Ca
Aline Parreau, LIRIS UMR CNRS, Université Lyon 1, Lyon, France, e-mail: aline.parreau@univ-lyon1.fr
https://doi.org/10.1515/9783110755411-008
and oR (g) is the winner (L or R) when Right moves first.
It is usual to talk of the outcome of a position g and the associated outcome function o(g):
– For oL (g) = oR (g) = L, Left wins regardless of who moves first, written o(g) = ℒ.
– For oL (g) = oR (g) = R, Right wins regardless of who moves first, o(g) = ℛ.
– For oL (g) = L, oR (g) = R, the player who starts has a winning strategy, o(g) = 𝒩 .
– For oL (g) = R, oR (g) = L, the second player has a winning strategy, o(g) = 𝒫 .
In the outcome function, there should be a reference to the game/rules. In this paper,
the position will be a number, but the rules will be clear from the context, so the rules
will not be included in the function.
A partizan subtraction game G with rules (Sℒ , Sℛ ) will be denoted (Sℒ , Sℛ ) in the
rest of the paper. A game position of G will be simply denoted by an integer n cor-
responding to the size of the heap. The outcome sequence of G is the sequence of the
outcomes for n = 0, 1, 2, 3, . . . , i. e., o(0), o(1), o(2), . . . . A well-known result ensures that
the outcome sequence of any impartial subtraction game is ultimately periodic [7].
Note that in that case the outcomes only have the values 𝒫 and 𝒩 since the game is
impartial. In [2], this result is extended to partizan subtraction games.
Theorem 1 (Fraenkel and Kotzig [2]). The outcome sequence of any partizan subtrac-
tion game is ultimately periodic.
Example 2. Consider the partizan subtraction game G = ({1, 2}, {1, 3}). The outcome
sequence of G is
𝒫 𝒩 ℒ 𝒩 ℒ ℒ ℒ ℒ ⋅⋅⋅.
In this particular case the periodicity of the sequence can be easily proved by showing
by induction that the outcome is ℒ for n ≥ 4.
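The outcome sequence in Example 2 is easy to reproduce by brute force; the following sketch (our own code, not from the paper) computes o(n) directly from the definitions:

```python
from functools import lru_cache

SL, SR = frozenset({1, 2}), frozenset({1, 3})  # the game ({1,2}, {1,3})

@lru_cache(maxsize=None)
def left_wins(n, left_to_move):
    """True iff Left wins a heap of n with the given player to move."""
    moves = SL if left_to_move else SR
    opts = [left_wins(n - m, not left_to_move) for m in moves if m <= n]
    # any([]) == False and all([]) == True: a player with no move loses.
    return any(opts) if left_to_move else all(opts)

def outcome(n):
    l, r = left_wins(n, True), left_wins(n, False)
    return {(True, True): "L", (False, False): "R",
            (True, False): "N", (False, True): "P"}[(l, r)]

print("".join(outcome(n) for n in range(8)))  # PNLNLLLL
```

Swapping in other subtraction sets would let one probe dominance and the other behaviors discussed below, for small heap sizes at least.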
Such a behavior where the outcome sequence has period 1 is rather frequent
for partizan subtraction games. In that case the period is only ℒ or ℛ. In their pa-
per, Fraenkel and Kotzig called this property dominance. More precisely, we say that
Sℒ ≻ Sℛ , or that Sℒ dominates Sℛ , if there exists an integer n0 such that the outcome
of the game (Sℒ , Sℛ ) is always ℒ for all n ≥ n0 . By symmetry a game satisfying Sℒ ≺ Sℛ
is always ℛ for all sufficiently large heap sizes. When a game satisfies neither Sℒ ≻ Sℛ
nor Sℒ ≺ Sℛ , the sets Sℒ and Sℛ are said to be incomparable, denoted by Sℒ ‖Sℛ . In
[2], several instances have been proved to satisfy the dominance property (i. e., the
games ({1, 2m}, {1, 2n + 1}), and ({1, 2m}, {1, 2n})) or to be incomparable like ({a}, {b}). It
is also shown that the dominance relation is not transitive. Note that in [5] the game
Partizan subtraction games | 123
values (i. e., a refinement of the outcome notion) have been computed for the games
({1, 2}, {1, k}).
In the literature, partizan taking and breaking games have received relatively little
attention. A more general version, where it is also allowed to split the heap into
two heaps, was introduced by Fraenkel and Kotzig [2] and is known as partizan octal
games. A particular case of such games, called partizan splittles, was considered in [4],
where, in addition, Sℒ and Sℛ are allowed to be infinite sets. Another variation with
infinite sets is when Sℒ and Sℛ make a partition of ℕ [3]. In such cases the ultimate
periodicity of the outcome sequence is not necessarily preserved.
In the current paper, we propose a refinement of the structure of the outcome se-
quence for partizan subtraction games. More precisely, when the sets Sℒ and Sℛ are in-
comparable, different kinds of periodicity can occur. The following definition presents
their classification.
Remark 4. Note that inside a period, not all the combinations of 𝒫 , 𝒩 , ℒ, and ℛ are
possible. For example, in a game that is not 𝒰ℐ , a period that includes 𝒫 must in-
clude 𝒩 . Indeed, assume on the contrary that it is not the case and let n be a position
of outcome 𝒫 in the period, where the period has length p. Let a ∈ Sℒ . Now the position
n + a is in the period, and o(n + a) = ℒ since Left can win going first and, by assump-
tion, o(n + a) ≠ 𝒩 . For the same reason, o(n + 2a) = ℒ. By repeating this argument,
o(n + ka) = ℒ for all k. Since n is in the period, we now have 𝒫 = o(n) = o(n + pa) = ℒ,
a contradiction.
Whereas the literature detailed above gives examples of 𝒮𝒟 and 𝒰ℐ games (as im-
partial subtraction games are 𝒰ℐ ), we will see later in this paper examples of 𝒲𝒟
games (e. g., Lemma 15). We now give an example of a fair game.
Example 5. Let Sℒ = {c, c + 1} and Sℛ = {1, b} with b = c(c + 1) and c > 1. Then the
game (Sℒ , Sℛ ) is ℱ .
Proof. We proceed by induction on the size of the heap to show that there are infinitely
many ℒ and ℛ. Since c > 1, we have o(1) = ℛ and o(c + 1) = ℒ. Now we assume that
for some n, o(n) = ℒ, and show that o(n + b + c) = ℒ. In the position n + b + c, Left
considers these as the two heaps n and b + c, and if Right removes 1, then Left regards
this as a move in the n component, else it is a move in the second heap. Left moving
124 | E. Duchêne et al.
first applies her winning strategy on n and then, regardless of whether Left moved
first or second, responds in the remnants of the n heap whenever Right removes 1 token.
If at some point, Right chooses to remove b tokens, then Left answers immediately by
removing c tokens, eliminating the second heap. In that case, Left wins at the end by
applying her winning strategy on n. On the contrary, if Right never plays b, then Left
empties the n component, and it is Right’s turn from the b + c position. Again, from
b + c, playing b is a losing move for Right. If he plays 1, then Left plays c + 1, leading
to the position b − 2 = (c − 1)(c + 2). All the next legal moves of Right are 1, and all the
answers of Left are c + 1, which guarantees to empty the position and hence win the
game.
Assume now that o(n) = ℛ, and we show that o(n+b+c) = ℛ. As previously, Right
considers this position as the two heaps n and b + c. He applies his winning strategy
on n, and any move c of Left leads Right to answer by removing b tokens, leaving a
winning position for Right. Hence assume that Left plays c + 1 until Right wins on n.
At this point, Left has to play from a position k + b + c with k < c. If k = 0, then Left
loses for the same reasons as in the above case (as the position b + c is 𝒫 ). Otherwise,
any move c or c + 1 of Left is followed by a move b of Right, leading to a position with
at most k tokens, from which Left cannot play and loses.
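The two invariants established in this proof — o(n) = ℛ at the positions 1 + m(b + c) and o(n) = ℒ at the positions (c + 1) + m(b + c) — can be checked numerically for the smallest instance c = 2, b = 6 (so b + c = 8). The solver below is our own sketch, not part of the paper:

```python
def outcomes(SL, SR, N):
    """Naive outcome solver for the partizan subtraction game (SL, SR)."""
    lw = [False] * (N + 1)  # Left, moving first, wins on a heap of size n
    rw = [False] * (N + 1)  # Right, moving first, wins on a heap of size n
    for n in range(N + 1):
        lw[n] = any(s <= n and not rw[n - s] for s in SL)
        rw[n] = any(s <= n and not lw[n - s] for s in SR)
    return ["N" if lw[n] and rw[n] else "L" if lw[n] else
            "R" if rw[n] else "P" for n in range(N + 1)]

# c = 2, b = c(c + 1) = 6: the game ({2, 3}, {1, 6}).
out = outcomes({2, 3}, {1, 6}, 99)
assert all(out[1 + 8 * m] == "R" for m in range(12))  # infinitely many R
assert all(out[3 + 8 * m] == "L" for m in range(12))  # infinitely many L
```

Both residue classes behave as the proof claims, so the game is indeed ℱ.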
The paper is organized as follows. In Section 2, we consider the two decision prob-
lems related to the computation of the outcome of a game position and of the behavior
of the outcome sequence. Links with the Frobenius coin problem and the knapsack
problem are given. Then we try to characterize the behavior of the outcome sequence
(𝒮𝒟, 𝒲𝒟, ℱ or 𝒰ℐ ) according to Sℒ and Sℛ . When Sℒ is fixed, Section 3 gives gen-
eral results about strong and weak dominance according to the size of Sℛ . In Section 4,
we characterize the behavior of the outcome sequence when |Sℛ | = 1
and |Sℒ | ≤ 2. Section 5 is devoted to the case |Sℒ | = |Sℛ | = 2, where it is proved that
the sequence is mostly strongly dominating.
2 Complexity
Computing the outcome of a game position is a natural question when studying combi-
natorial games. For partizan subtraction games, we know that the outcome sequence
is eventually periodic. This implies that if Sℒ and Sℛ are fixed, then computing the
outcome of a given position n can be done in polynomial time. However, if the sub-
traction sets are part of the input, then the algorithmic complexity of the problem is
not so clear. This problem can be expressed as follows:
psg outcome
Input: two sets of integers Sℒ and Sℛ , a game position n
Output: the outcome of n for the game (Sℒ , Sℛ )
Theorem 6. psg outcome is NP-hard, even in the case where the set of one of the players
is reduced to one element.
The second question that emerged from partizan subtraction games is the behav-
ior of the outcome sequence according to Definition 3. It can also be formulated as a
decision problem.
psg sequence
Input: two sets of integers Sℒ and Sℛ
Output: is the game (Sℒ , Sℛ ) 𝒮𝒟, 𝒲𝒟 (and not 𝒮𝒟), ℱ , or 𝒰ℐ ?
Unlike psg outcome, the algorithmic complexity is open for psg sequence. In the
next sections, we consider this problem for some particular cases. In addition, we can
wonder whether the knowledge of the sequence could help to compute the outcome
of a game position. The answer is no, even if the game is 𝒮𝒟.
The proof is based on the well-known coin problem (also called Frobenius prob-
lem).
coin problem
Input: a set of n positive integers a1 , . . . , an such that gcd(a1 , . . . , an ) = 1
Output: the largest integer that cannot be expressed as a nonnegative integer linear
combination of a1 , . . . , an .
This value is called the Frobenius number. For n = 2, the Frobenius number equals
a1 a2 − a1 − a2 [8].¹ No explicit formula is known for larger values of n. Moreover, the
problem has been proved to be NP-hard in the general case [6].
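For small inputs, the Frobenius number is easy to compute directly: mark the representable integers in increasing order and stop once min(a1, ..., an) consecutive integers are representable, since every larger integer is then representable as well. This helper (`frobenius` is our name, not from the paper) is a sketch:

```python
from functools import reduce
from math import gcd

def frobenius(nums):
    """Largest integer not representable as a nonnegative integer
    combination of nums (assumes gcd(nums) == 1 and min(nums) >= 2)."""
    assert reduce(gcd, nums) == 1
    reachable = [True]       # 0 is representable by the empty combination
    largest, run, n = 0, 0, 0
    # Once min(nums) consecutive integers are representable, all larger are.
    while run < min(nums):
        n += 1
        ok = any(a <= n and reachable[n - a] for a in nums)
        reachable.append(ok)
        run = run + 1 if ok else 0
        if not ok:
            largest = n
    return largest

print(frobenius([3, 5]))       # → 7 (= 3*5 - 3 - 5, Sylvester's formula)
print(frobenius([6, 10, 15]))  # → 29
```

For two coprime values the result matches the closed formula a1 a2 − a1 − a2; for more values no such formula is available, consistent with the NP-hardness result of [6].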
Proof of Proposition 8. Under the assumptions of the proposition, we will show that
the length of the preperiod is exactly the Frobenius number of {a1 +1, . . . , an +1}. Indeed,
let N be the Frobenius number of {a1 + 1, . . . , an + 1}. Then N + 1, N + 2, . . . can be written
as linear combinations of {a1 + 1, . . . , an + 1}. Note that in the game (Sℒ , Sℛ ), any round
(sequence of two moves) can be seen as a linear combination of {a1 + 1, . . . , an + 1}, as
Left plays ai and Right plays 1. Hence if Right starts from N + 1, then Left follows the
linear combination for N + 1 to choose her moves, so as to play an even number of
moves until the heap is empty. For the same reasons, if Right starts from N + 2, then
Left has a winning strategy as a second player. Since Right’s first move is necessarily
1, this means that Left has a winning strategy as a first player from N + 1. Thus the
position satisfies o(N + 1) = ℒ. Using the same arguments, this remains true for all
positions greater than N + 1. In other words, it proves that the game is 𝒮𝒟 for Left.
Now we consider the position N and show that o(N) ≠ ℒ. Indeed, assume that Right
starts and Left has a winning strategy. This means that an even number of moves will
be played. According to the previous remark, the sequence of moves that is winning
for Left is necessarily a linear combination of {a1 + 1, . . . , an + 1}. This contradicts the
Frobenius property of N.
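The statement of Proposition 8 is not reproduced above, but the proof indicates that it concerns games with Sℒ = {a1, . . . , an}, Sℛ = {1}, and gcd(a1 + 1, . . . , an + 1) = 1. A quick numerical check for Sℒ = {2, 4} (the outcome solver below is our own helper): here {ai + 1} = {3, 5} has Frobenius number 3·5 − 3 − 5 = 7, so the preperiod should end exactly at heap size 7.

```python
def outcomes(SL, SR, N):
    lw = [False] * (N + 1)  # Left, moving first, wins on heap n
    rw = [False] * (N + 1)  # Right, moving first, wins on heap n
    for n in range(N + 1):
        lw[n] = any(s <= n and not rw[n - s] for s in SL)
        rw[n] = any(s <= n and not lw[n - s] for s in SR)
    return ["N" if lw[n] and rw[n] else "L" if lw[n] else
            "R" if rw[n] else "P" for n in range(N + 1)]

# SL = {2, 4}, SR = {1}: outcome should be L exactly from heap size 8 on.
out = outcomes({2, 4}, {1}, 60)
assert out[7] != "L" and all(o == "L" for o in out[8:])
```

The computed sequence is ℒ from 8 onwards and not ℒ at 7, matching the claimed preperiod.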
This correlation between partizan subtraction games and the coin problem will
be reused further in this paper.
¹ Although not germane to this paper, Sylvester’s solution is central to the strategy stealing argument
that proves that naming a prime 5 or greater is a winning move in sylver coinage [1, Ch. 18].
3 When Sℒ is fixed
In this section, we consider the case where Sℒ is fixed and study the behavior of the
sequence as Sℛ varies. In particular, we look for sets Sℛ that make the game (Sℒ , Sℛ )
favorable for Right. This can be seen as a prelude to the game where players would
choose their sets before playing: if Left has chosen her set Sℒ , can Right force the game
to be asymptotically more favorable for him?
Theorem 9. Let Sℒ be any finite set of integers. Let p be the period of the impartial sub-
traction game played with Sℒ , and let Sℛ = Sℒ ∪ {p}. Then Right strongly dominates the
game (Sℒ , Sℛ ), i. e., the game (Sℒ , Sℛ ) is ultimately ℛ.
Proof. Let n0 be the preperiod of the impartial subtraction game played on Sℒ , and
let m be the maximal value of Sℒ . We prove that Right wins if he starts on any heap of
size n > n0 + p, which implies that the outcome on (Sℒ , Sℛ ) is ℛ for any heap of size
n > n0 + p + m.
If n is an 𝒩 -position for the impartial subtraction game on Sℒ , then Right follows
the strategy for the first player, never uses the value p, and wins.
If n is a 𝒫 -position, Right takes p tokens, leaving Left with a heap of size n−p > n0 ,
which is, using periodicity, also a 𝒫 -position in the impartial game. After Left’s move,
we are in the case of the previous paragraph and Right wins.
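As a small instance of Theorem 9 (the outcome solver is our own helper): the impartial subtraction game with {1, 2} has 𝒫-positions exactly at the multiples of 3, hence period p = 3, so the theorem predicts that ({1, 2}, {1, 2, 3}) is ultimately ℛ.

```python
def outcomes(SL, SR, N):
    lw = [False] * (N + 1)  # Left, moving first, wins on heap n
    rw = [False] * (N + 1)  # Right, moving first, wins on heap n
    for n in range(N + 1):
        lw[n] = any(s <= n and not rw[n - s] for s in SL)
        rw[n] = any(s <= n and not lw[n - s] for s in SR)
    return ["N" if lw[n] and rw[n] else "L" if lw[n] else
            "R" if rw[n] else "P" for n in range(N + 1)]

# Sl = {1, 2} has impartial period p = 3; SR = SL ∪ {p} = {1, 2, 3}.
out = outcomes({1, 2}, {1, 2, 3}, 60)
assert all(o == "R" for o in out[3:])
```

Indeed the outcome is ℛ for every heap of size at least 3.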
Note that in the previous theorem, Sℛ contains the set Sℒ and thus has a large
common intersection. We prove in the next theorem that if Sℛ cannot contain any
value in Sℒ , then it is still possible to have a game that is at least fair for Right (i. e., it
contains an infinite number of ℛ-positions). Note that we do not know if for any set
Sℒ , there is always a set Sℛ with |Sℛ | = |Sℒ | + 1 and Sℛ ∩ Sℒ = ∅ that is (weakly or
strongly) dominating for Right.
Theorem 10. For any set Sℒ , there exists a set Sℛ with Sℒ ∩ Sℛ = ∅ and |Sℛ | = |Sℒ | + 1
such that the resulting game contains an infinite number of ℛ-positions.
Proof. Let n be any integer such that the set A = {n − m : m ∈ Sℒ } is a set of positive
integers that is disjoint from Sℒ . Putting Sℛ = A ∪ {n} gives a set that satisfies the
conditions of the theorem; consider the game (Sℒ , Sℛ ).
We claim that o(kn) = ℛ for k = 1, 2, . . . . If Left starts on a position kn by removing
m tokens, then Right can answer by taking n − m tokens and leaves (k − 1)n tokens, and
by induction Right wins. If Right starts, then he takes n tokens, and, again, Left has a
multiple of n and loses.
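The construction in this proof can be tried concretely (the solver is our own helper). With Sℒ = {1, 2} and n = 10, the set A = {8, 9} is positive and disjoint from Sℒ, giving Sℛ = {8, 9, 10}:

```python
def outcomes(SL, SR, N):
    lw = [False] * (N + 1)  # Left, moving first, wins on heap n
    rw = [False] * (N + 1)  # Right, moving first, wins on heap n
    for n in range(N + 1):
        lw[n] = any(s <= n and not rw[n - s] for s in SL)
        rw[n] = any(s <= n and not lw[n - s] for s in SR)
    return ["N" if lw[n] and rw[n] else "L" if lw[n] else
            "R" if rw[n] else "P" for n in range(N + 1)]

SL = {1, 2}
n = 10                          # every n - m is positive and outside SL
SR = {n - m for m in SL} | {n}  # = {8, 9, 10}
out = outcomes(SL, SR, 5 * n)
assert all(out[k * n] == "R" for k in range(1, 6))
```

Every multiple of n is an ℛ-position, exactly as claimed.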
Consequently, if Right has a small advantage on the size of the set, then he can
ensure that the sequence of outcomes contains an infinite number of ℛ-positions. So
having a larger subtraction set seems to be an important advantage. However, having a
larger set is not always enough to guarantee dominance. Indeed, we have the following
result.
Theorem 11. Let G = (Sℒ , Sℛ ) be a partizan subtraction game. Assume that |Sℒ | ≥ 2 and
that G is eventually ℒ with preperiod at most p. Let x1 , x2 ∈ Sℒ with x1 < x2 , and let d be
an integer with d > p + max(Sℛ ∪ {x2 − x1 }). Then Gd = (Sℒ , Sℛ ∪ {d}) is eventually ℒ with
preperiod at most (d + x2 )⌈(d + x2 )/(x2 − x1 )⌉.
Proof. Let G, d, x1 , and x2 be as in the statement of the theorem. We first prove the
following claim.
Claim 12. In the game Gd , if oL (n) = ℒ (resp., oR (n) = ℒ), then Left has a winning
strategy on n + (d + x) as the first (resp., second) player with x ∈ Sℒ .
Suppose now that Left wins playing first on n, and let y ∈ Sℒ be a winning move for
Left. Then Left wins playing second on n − y, and using the induction hypothesis, she
wins playing second on n − y + d + x. Consequently, y is a winning move for Left on
n + d + x.
For i ≥ 0, denote by Xi the set of integers 0 ≤ k < d + x2 such that the position i(d + x2 ) + k
is ℒ for Gd . To prove the theorem, it suffices to show that if i is large enough, then
Xi = [0, d + x2 [. From the claim above we know that Xi ⊆ Xi+1 .
Additionally, using the hypothesis on d, we have that [p + 1, d − 1] ⊆ X0 . Finally, we
have the following property: for any x ≥ 0, if x ∈ Xi , then (x − (x2 − x1 )) mod (d + x2 ) ∈
Xi+1 . Indeed, if x ∈ Xi , then i(d + x2 ) + x is an ℒ-position, and using the claim above, so
is i(d + x2 ) + x + d + x1 = (i + 1)(d + x2 ) + x − (x2 − x1 ).
x = ((d − β) − α(x2 − x1 )) mod (d + x2 ).
Applying iteratively Theorem 11 with a game that is 𝒮𝒟 for Left (like the game of
Example 2), we obtain the following corollary.
Corollary 13. There are sets Sℒ and Sℛ with |Sℒ | = 2 and |Sℛ | arbitrarily large such that
(Sℒ , Sℛ ) is 𝒮𝒟 for Left.
Proof.
1. In this case, the game is purely periodic with period 𝒫ℒ^c 𝒩^k . This can be proved
by induction on the size of the heap n. If 0 < n ≤ c, then only Left can play, and
the game is trivially ℒ. Otherwise, let x = n mod (c + k + 1). If x = 0 and the first
player removes i tokens, then the second player answers by removing c + k + 1 − i
tokens, leading to the position n − c − k − 1, which is 𝒫 by induction, and so is n.
If 0 < x < c + 1, then when Left starts, she takes one token, leading to an ℒ- or
𝒫 -position, and wins. If she is second, then she plays as before to n − c − k − 1,
which is an ℒ-position. Finally, if x ≥ c + 1, then both players win playing first by
playing x − c for Left and x for Right.
2. We show that if n > 0 is such that Right wins playing second on n, then Sℛ contains
k consecutive integers. Let n0 be the smallest positive integer
such that oL (n0 ) = ℛ. We know that n0 > k, since otherwise Left can win playing
first by playing to zero. Since Right has a winning strategy playing second, he has a
winning first move on all the positions n0 − i for 1 ≤ i ≤ k. This means that for each of
these positions, Right has a winning move to some position mi with oL (mi ) = ℛ.
By the minimality of n0 , this implies that mi = 0, and consequently n0 − i ∈ Sℛ
for all 1 ≤ i ≤ k. Consequently, if Sℛ does not contain k consecutive integers, then
there is no position n > 0 such that Right wins playing second. In particular, there
are neither ℛ- nor 𝒫 -positions in the period. By Remark 4 this implies that the
period only contains ℒ-positions, meaning that the game is strongly dominating
for Left.
The set Sℒ = {1, . . . , k} is somehow optimal for Left, since the exceptions to strong
dominance for Left in the previous lemma appear for any set of k elements:
Lemma 16. For any set Sℒ , there is a set Sℛ with |Sℛ | = |Sℒ | and Sℛ ∩ Sℒ = ∅ such that
Left does not strongly dominate.
Proof. Let Sℛ = n0 − Sℒ for an integer n0 larger than all the values of Sℒ and such that
Sℛ ∩ Sℒ = ∅. Then Right wins playing second on all the multiples of n0 .
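A concrete instance of this construction (the solver is our own helper): with Sℒ = {1, 2} and n0 = 5, we get Sℛ = {3, 4}, and Right answering any move m by n0 − m keeps the heap on multiples of n0.

```python
def outcomes(SL, SR, N):
    lw = [False] * (N + 1)  # Left, moving first, wins on heap n
    rw = [False] * (N + 1)  # Right, moving first, wins on heap n
    for n in range(N + 1):
        lw[n] = any(s <= n and not rw[n - s] for s in SL)
        rw[n] = any(s <= n and not lw[n - s] for s in SR)
    return ["N" if lw[n] and rw[n] else "L" if lw[n] else
            "R" if rw[n] else "P" for n in range(N + 1)]

SL = {1, 2}
n0 = 5
SR = {n0 - s for s in SL}   # = {3, 4}, disjoint from SL
out = outcomes(SL, SR, 5 * n0)
# Right wins playing second on every multiple of n0, i.e. the outcome
# there is P or R, so the game is not strongly dominating for Left.
assert all(out[k * n0] in ("P", "R") for k in range(1, 6))
```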
Lemma 17. Let Sℒ = {a} and Sℛ = {b} with a < b. The outcome sequence of S = (Sℒ , Sℛ )
is purely periodic, the period length is a + b, and the period is 𝒫^a ℒ^(b−a) 𝒩^a . In particular,
the game is weakly dominating for Left.
Proof. We prove that for all n ≥ 0, if one of the players has a winning move playing
first (resp., second) on n, then he also has one playing first (resp., second) on n + a + b.
Indeed, suppose, for example, that Left has a winning move on position n playing first
(the other cases are treated in the same way). If Left plays first on position n + a + b,
then after two moves, it is again Left’s turn to play, and the position is now n, and Left
wins the game.
The result then follows from computing the outcome of the positions n ≤ a + b.
These outcomes are tabulated in Table 8.1.
Table 8.1: Outcomes with Sℒ = {a} and Sℛ = {b} for the first values.
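The claimed period can be checked directly for, say, a = 2 and b = 5 (the solver is our own helper, not part of the paper):

```python
def outcomes(SL, SR, N):
    lw = [False] * (N + 1)  # Left, moving first, wins on heap n
    rw = [False] * (N + 1)  # Right, moving first, wins on heap n
    for n in range(N + 1):
        lw[n] = any(s <= n and not rw[n - s] for s in SL)
        rw[n] = any(s <= n and not lw[n - s] for s in SR)
    return ["N" if lw[n] and rw[n] else "L" if lw[n] else
            "R" if rw[n] else "P" for n in range(N + 1)]

a, b = 2, 5
out = outcomes({a}, {b}, 4 * (a + b) - 1)
# purely periodic with period P^a L^(b-a) N^a of length a + b
assert "".join(out) == ("P" * a + "L" * (b - a) + "N" * a) * 4
```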
Theorem 18. Let a, b, and c be three positive integers, and let g = gcd(a + c, b + c). The
game ({a, b}, {c}) is:
– strongly dominated by Left if g ≤ c,
– weakly dominated by Left with period (𝒫^(g−c) ℒ^(2c−g) 𝒩^(g−c) ) if c < g < 2c,
– ultimately impartial with period (𝒫^c 𝒩^c ) if g = 2c,
– weakly dominated by Right with period (𝒫^c ℛ^(g−2c) 𝒩^c ) if g > 2c.
Proof. After both players play once, the number of tokens decreases by either a + c
or b + c, depending on which move Left played. By the results on the coin problem
we know that if q is large enough, then qg can be written as α(a + c) + β(b + c) with
nonnegative integers α and β. If Left plays second, then a strategy can be to play a α
times and b β times. After these moves, it is Right’s turn to play, and the position is
r < c. Consequently, Right now has no move and loses the game.
We will now use this claim to prove the result in four different cases.
For the first case, we have g ≤ c. For any integer n, we have (n mod g) < g ≤ c.
Consequently, by Claim 19 there is an integer n0 such that for any n ≥ n0 , oR (n) = ℒ.
This also implies that for any n ≥ n0 + a, oL (n) = ℒ since she plays to n − a ≥ n0 , and,
by the claim, oR (n − a) = ℒ. Thus the outcome is ℒ for any position n large enough.
For the three remaining cases, we will show that the following four properties hold
when n is large enough. The result of the theorem immediately follows from these four
properties.
1. if r < c, then Left wins playing second,
2. if r ≥ g − c, then Left wins playing first,
3. if r ≥ c, then Right wins playing first,
4. if r < g − c, then Right wins playing second.
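Two of the cases of Theorem 18 can be checked numerically (the solver is our own helper): ({1, 3}, {1}) has g = gcd(1 + 1, 3 + 1) = 2 = 2c and should be ultimately impartial with period 𝒫^c 𝒩^c = 𝒫𝒩, whereas ({1, 2}, {2}) has g = gcd(3, 4) = 1 ≤ c and should be ultimately ℒ.

```python
def outcomes(SL, SR, N):
    lw = [False] * (N + 1)  # Left, moving first, wins on heap n
    rw = [False] * (N + 1)  # Right, moving first, wins on heap n
    for n in range(N + 1):
        lw[n] = any(s <= n and not rw[n - s] for s in SL)
        rw[n] = any(s <= n and not lw[n - s] for s in SR)
    return ["N" if lw[n] and rw[n] else "L" if lw[n] else
            "R" if rw[n] else "P" for n in range(N + 1)]

# g = 2c: ultimately impartial (here even purely periodic PN).
assert "".join(outcomes({1, 3}, {1}, 19)) == "PN" * 10
# g <= c: ultimately L.
assert all(o == "L" for o in outcomes({1, 2}, {2}, 40)[3:])
```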
When c > b and b ≥ 2a, which is included in the first case, we know the whole
outcome sequence. This will be useful in the next section.
Theorem 20. The outcome sequence of the game ({a, b}, {c}) with c > b and b ≥ 2a is
the following:
𝒫^a ℒ^(c−a) 𝒩^a ℒ^∞ .
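For a = 1, b = 2, c = 3 (so c > b and b ≥ 2a), the predicted sequence is 𝒫^1 ℒ^2 𝒩^1 ℒ^∞, which is easy to confirm (the solver is our own helper):

```python
def outcomes(SL, SR, N):
    lw = [False] * (N + 1)  # Left, moving first, wins on heap n
    rw = [False] * (N + 1)  # Right, moving first, wins on heap n
    for n in range(N + 1):
        lw[n] = any(s <= n and not rw[n - s] for s in SL)
        rw[n] = any(s <= n and not lw[n - s] for s in SR)
    return ["N" if lw[n] and rw[n] else "L" if lw[n] else
            "R" if rw[n] else "P" for n in range(N + 1)]

a, b, c = 1, 2, 3                        # c > b and b >= 2a
out = outcomes({a, b}, {c}, 40)
assert out[:4] == list("PLLN")           # P^a L^(c-a) N^a ...
assert all(o == "L" for o in out[4:])    # ... then L forever
```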
Figure 8.1: Properties of the outcome sequences for G = ({a, a + k}, {c, d}). The parameters a and k
are fixed, and the pictures are obtained by varying the parameters c and d. The point at coordinate
(c, d) is blue if the corresponding game is eventually ℒ, red if it is eventually ℛ, and green if there is
a mixed period.
5.1 Case b ≥ 2a
We start with the case where b ≥ 2a and show that in this case, G is ultimately ℒ if (c, d)
is far enough from the diagonal.
𝒫^a ℒ^(c−a) 𝒩^a ℒ^(d−c−a) 𝒩^a ℒ^∞ .
Proof. Again, we will show this result by induction on n, the starting position of the
game. Let G′ be the game ({a, a + k}, {c}). If n < d, then G played on n has the same
outcome as G′ , since playing d is not a valid move for Right in this case. Consequently,
we can just apply Theorem 20, and get the desired result. Otherwise, there are two
possible cases:
– If d ≤ n < d + a, then Right has a winning move to the position n − d < a, and
Left has a winning move by playing her strategy for the game G′ on n. Indeed, this
leads to a position n − x < d for some x ∈ {a, a + k} with outcome either 𝒫 or ℒ for
G′ and, consequently, also for G, since d cannot be played anymore at this point.
– If n ≥ d + a, then denote by I1 and I2 the two intervals containing the 𝒩 -positions,
i. e., I1 = [c, c + a[, and I2 = [d, d + a[. Since k ≥ a, we cannot have that n − a and
n − a − k are both in I1 or both in I2 . Additionally, since d > c + a + k, we cannot
have both n − a − k ∈ I1 and n − a ∈ I2 at the same time. Consequently, one of n − a
and n − a − k has outcome either ℒ or 𝒫 , and Left has a winning move on n.
Definition 22. Given an integer a and a real number α ≥ 1, we denote by Ta,α the set
of points defined as follows:
– T0,α = {(c, d) : gcd(c, d) ≥ max(c, d)/α};
– for a ≥ 1, Ta,α is obtained from T0,α by a translation of vector (−a, −a).
We can remark that for any α and β with β ≥ α, we have T0,α ⊆ T0,β . We now prove
some properties of the sets Ta,α , which will be useful for the proofs later on.
Lemma 23. Assume that there are some positive integers x, y, u, and v such that xu−yv =
0 with (u, v) ≠ (0, 0). Then (x, y) ∈ T0,max(u,v) .
Proof. Up to dividing u and v by gcd(u, v), we can assume that u and v are coprime.
Then the equation is xu = yv. Consequently, u is a divisor of yv, and since u and v are
coprime, this means that u is a divisor of y. We can write y = gu and, consequently,
xu = yv = vgu. This means that x = vg and g = gcd(x, y). Consequently,
max(x, y)/gcd(x, y) = max(u, v), and (x, y) ∈ T0,max(u,v) .
Given two points p = (x, y) and p′ = (x′ , y′ ), we denote by d(p, p′ ) the distance
between these two points according to the 1-norm: d(p, p′ ) = |x − x′ | + |y − y′ |. If 𝒟 is
a subset of ℕ², then we denote by d(p, 𝒟) = min{d(p, p′′ ) : p′′ ∈ 𝒟} the distance of the
point p to the set 𝒟.
Lemma 24. Let x, y, u, v, and a be positive integers such that |xu − yv| ≤ a. Then
d((x, y), T0,max(u,v) ) ≤ a(u + v).
Proof. Let r = xu − yv with |r| ≤ a, and let g = gcd(u, v). By definition, r is a multiple
of g, and we can write r = qg for some integer q. Additionally, by Bézout’s identity
we know that there exist two integers u′ and v′ such that uu′ + vv′ = g, |u′ | ≤ u, and
|v′ | ≤ v. Consider the point (x′ , y′ ) with x′ = x − qu′ and y′ = y + qv′ . We have the
following:
x′ u − y′ v = xu − yv − q(uu′ + vv′ ) = r − qg = 0.
By Lemma 23 we know that (x′ , y′ ) ∈ T0,max(u,v) . Additionally, d((x, y), (x′ , y′ )) = |qu′ | +
|qv′ | ≤ |r|(u + v) ≤ a(u + v). This proves the lemma.
For any a and α, the set Ta,α satisfies the following properties.
Lemma 25. For any a and α, the set Ta,α is the union of a finite set of lines.
Proof. Since Ta,α can be obtained from T0,α by a translation, we only need to prove the
result in the case a = 0. Let 𝒟 be the union of the lines with equation xu − yv = 0 for
all u, v ≤ α. The set 𝒟 is the union of a finite number of lines. By Lemma 23 we know
that 𝒟 ⊆ T0,α . Reciprocally, let (x, y) be a point in T0,α , and let g = gcd(x, y). We can
write x = x′ g and y = y′ g for some integers x′ and y′ . We have the following:
xy′ − yx′ = x′ gy′ − y′ gx′ = 0.
Additionally, we have x′ = x/g ≤ xα/max(x, y) ≤ α, and similarly for y′ . Consequently,
(x, y) ∈ 𝒟, and T0,α = 𝒟.
The goal in the remaining of this section is to prove the following theorem.
Theorem 26. Let a, b, c, and d be positive integers, and let A = ⌈a/(b − a)⌉ + 1. Assume
that d((c, d), Ta,A ) ≥ 2A(a + 2b). Then the partizan subtraction game with Sℒ = {a, b}
and Sℛ = {c, d} is ultimately ℒ.
where
– αi,j = i(d + b) + j(c + b),
– βi,j = αi,j − b.
statement of the theorem, the set I 𝒩 is the set of 𝒩 -positions, I 𝒫 the set of 𝒫 -positions,
and all the other positions have outcome ℒ. In particular, since both I 𝒫 and I 𝒩 are
finite, this implies that the outcome sequence is eventually ℒ. Before showing this,
we prove that under the conditions of the theorem, the intervals I𝒫i,j and I𝒩i,j satisfy
the following properties.
Lemma 27. Fix the parameters a and b, and let A = ⌈a/(b − a)⌉ + 1. Assume that c and d
are such that d((c, d), Tb,A ) ≥ 2A(a + 2b). Then the intervals I𝒩i,j and I𝒫i,j satisfy the
following properties:
(i) they are pairwise disjoint,
(ii) there is no interval I𝒫i′,j′ or I𝒩i′,j′ intersecting any of the b positions preceding I𝒩i,j ,
(iii) I𝒫i,j + c = I𝒩i,j+1 ,
(iv) I𝒫i,j + d = I𝒩i+1,j ,
(v) (I𝒩i,j + a) ∩ (I𝒩i,j + b) = I𝒫i,j .
Proof. The points (iii), (iv), and (v) are just consequences of the definitions of I𝒫i,j
and I𝒩i,j . Consequently, we only need to prove the two other points.
We know that I𝒩i,j and I𝒫i,j are empty when i + j ≥ ⌈a/(b − a)⌉ + 1 = A. Consequently,
we will further assume that the indices i, j, i′ , and j′ are all upper bounded by A. We first
show the following claim. The rest of the proof simply consists in applying this claim
several times.
Claim 28. Assume that there exist an integer B and indices i, j, i′ , j′ ≤ A such that one of
the following holds:
– |αi,j − αi′,j′ | ≤ B,
– |βi,j − βi′,j′ | ≤ B,
– |αi,j − βi′,j′ | ≤ B.
Then in all three cases we have d((c, d), Tb,A ) ≤ 2A(B + b).
Proof. The first two cases are equivalent to the inequality |(i − i′ )(d + b) + (j − j′ )(c +
b)| ≤ B, and the result follows by applying Lemma 24. The third case is equivalent
to |(i − i′ )(d + b) + (j − j′ )(c + b) + b| ≤ B. Using the triangle inequality, this implies
|(i − i′ )(d + b) + (j − j′ )(c + b)| ≤ B + b, and the result follows from Lemma 24.
We will prove the points (i) and (ii) by proving their contrapositives. In other
words, assuming that one of these two conditions does not hold, we want to show
that d((c, d), Tb,A ) ≤ 2A(a + 2b).
We first consider the point (i). First, assume that there are two intersecting inter-
vals I𝒫i,j and I𝒫i′,j′ . Then the left endpoint of one of these two intervals is contained in
the other. Without loss of generality, we can assume that αi,j ∈ I𝒫i′,j′ , which implies
that 0 ≤ αi,j − αi′,j′ ≤ a. Similarly, if I𝒩i,j and I𝒩i′,j′ intersect, we can assume without loss
of generality that βi,j ∈ I𝒩i′,j′ , and, consequently, 0 ≤ βi,j − βi′,j′ ≤ a − (i + j − 1)k ≤ a.
Again, using Claim 28, this implies d((c, d), Tb,A ) ≤ 2A(a + b).
Finally, if I𝒩i′,j′ and I𝒫i,j intersect, then either 0 ≤ αi,j − βi′,j′ ≤ a if αi,j ∈ I𝒩i′,j′ , or
From these inequalities we can immediately deduce −a − b ≤ βi′,j′ − βi,j ≤ 0. The
inequality d((c, d), Ta+k,A ) ≤ 2A(3a + 2k) follows immediately from Claim 28. Similarly,
if the interval I𝒫i′,j′ intersects one of the b positions preceding I𝒩i,j , then we have two
inequalities.
This implies −(a + b) ≤ αi′,j′ − βi,j ≤ 0, and again the result holds by Claim 28.
Using the induction hypothesis, this means that n′ ∈ I 𝒫 . However, this would mean
by conditions (iii) and (iv) that n ∈ I 𝒩 , which contradicts the property (i) that
I 𝒩 and I 𝒫 are disjoint. Consequently, Right has no winning move.
Finally, suppose that n ∉ I 𝒫 ∪ I 𝒩 . We will show that Left has a winning move on n
and Right does not. Since I𝒫0,0 = [0, a[, we can assume that n ≥ a, and Left can play a.
Suppose that Left’s move to n − a is not a winning move, and let us show that Left has
a winning move to n − a − k. Since Left’s move to n − a is not a winning move, this
means that n − a ∈ I𝒩i,j for some integers i, j with i + j ≥ 1. Consequently, we have n ≥ b,
and playing b is a valid move for Left. By condition (v) we cannot have n − b ∈ I𝒩i,j ,
since otherwise we would have n ∈ I𝒫i,j . Moreover, we cannot have n − b ∈ I𝒩i′,j′ for
some (i′ , j′ ) ≠ (i, j), since it would contradict condition (ii). Consequently, n − b ∈ I ℒ ,
and using the induction hypothesis, this is a winning move for Left. The only possible
winning move for Right would be to play to a position n′ that is a 𝒫 -position. Using
the induction hypothesis, this means that n′ ∈ I 𝒫 . However, using the conditions (iii)
and (iv), this would also imply n ∈ I 𝒩 , a contradiction.
Corollary 29. Under the conditions of the theorem, the game G is ultimately ℒ.
Bibliography
[1] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays, Vol. 1,
A K Peters, Ltd., Natick, MA, 2001.
[2] A. S. Fraenkel and A. Kotzig, Partizan octal games: partizan subtraction games, Internat. J. Game
Theory 16(2) (1987), 145–154.
[3] U. Larsson, N. A. McKay, R. J. Nowakowski, and A. A. Siegel, Wythoff partizan subtraction,
Internat. J. Game Theory 47(2) (2018), 613–652.
[4] G. A. Mesdal III, Partizan Splittles, in: Games of No Chance 3, MSRI Publications 56, Cambridge University Press, 2009.
[5] T. Plambeck, Notes on Partizan Subtraction Games, working notes.
[6] J. Ramírez Alfonsín, Complexity of the Frobenius problem, Combinatorica 16 (1996), 143–147.
[7] A. N. Siegel, Combinatorial Game Theory, American Mathematical Society, Providence, RI,
2013.
[8] J. J. Sylvester, Mathematical questions, with their solutions, Educational Times 41 (1884), 21.
[9] G. S. Lueker, Two NP-Complete Problems in Nonnegative Integer Programming, Report No. 178,
Computer Science Laboratory, Princeton University (1975).
Matthieu Dufour, Silvia Heubach, and Anh Vo
Circular Nim games CN(7, 4)
Abstract: Circular Nim is a two-player impartial combinatorial game consisting of n
stacks of tokens placed in a circle. A move consists of choosing k consecutive stacks
and taking at least one token from one or more of the stacks. The last player able to
make a move wins. The question of interest is: Who can win from a given position
if both players play optimally? This question is answered by determining the set of
𝒫 -positions from which the next player is bound to lose, no matter what moves the
player makes. We will completely characterize the set of 𝒫 -positions for n = 7 and
k = 4, adding to the known results for other games in this family. The interesting
feature of the set of 𝒫 -positions of this game is that it splits into different subsets,
unlike the structures for the previously solved games in this family.
1 Introduction
The game of Nim has been played since ancient times, and the earliest European ref-
erences to Nim are from the beginning of the sixteenth century. Its current name was
coined by Charles L. Bouton of Harvard University, who also developed the complete
theory of the game in 1902 [3]. Nim plays a central role among impartial games, as any
such game is equivalent to a Nim stack [2]. Many variations and generalizations of Nim
have been analyzed. They include subtraction games, Wythoff’s game, Nim on graphs
and on simplicial complexes, Take-away games, Fibonacci Nim, etc. [1, 5, 6, 8, 9, 10,
11, 12, 13, 15, 16]. We will study a particular case of another variation, called Circular
Nim, which was introduced in [4]. This game imposes a geometric structure on Nim
heaps, which gives rise to interesting features in the set of 𝒫 -positions.
Definition 1. In Circular Nim, n stacks of tokens are arranged in a circle. A move con-
sists of choosing k consecutive stacks and then removing at least one token from at
least one of the k stacks. Players alternate moves, and the last player who is able to
make a legal move wins. We denote this game by CN(n, k). A position in CN(n, k) is rep-
resented by the vector p = (p1 , p2 , . . . , pn ) of nonnegative entries indicating the heights
of the stacks in order around the circle. An option of p is a position to which there
is a legal move from p. We denote an option of p by p′ = (p′1 , p′2 , . . . , p′n ) and use the
notation p → p′ to denote a legal move from p to p′ .
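The move rule can be made precise with a small legality check: q is an option of p exactly when q lowers at least one stack, lowers no stack below zero, and every changed stack lies inside one window of k circularly consecutive stacks. The encoding and the function name below are ours, for illustration only:

```python
def is_option(p, q, k):
    """Check whether q is an option of p in CN(len(p), k)."""
    n = len(p)
    # q must not exceed p anywhere and must remove at least one token.
    if len(q) != n or sum(q) >= sum(p) or any(b > a for a, b in zip(p, q)):
        return False
    changed = [i for i in range(n) if p[i] != q[i]]
    # All changed stacks must fit in one window of k consecutive stacks.
    return any(all((i - s) % n < k for i in changed) for s in range(n))

p = (3, 1, 2, 2, 5, 5, 5)
print(is_option(p, (0, 0, 1, 1, 5, 5, 5), 4))  # → True: stacks 0..3 are consecutive
print(is_option(p, (2, 1, 1, 2, 4, 5, 5), 4))  # → False: stacks 0, 2, 4 fit in no window of 4
```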
https://doi.org/10.1515/9783110755411-009
140 | M. Dufour et al.
Circular Nim is an example of an impartial combinatorial game, one for which both
players have the same moves. For impartial games, there are only two types of posi-
tions (= outcome classes). The outcome classes are described from the standpoint
of which player will win when playing from the given position. An 𝒩 -position in-
dicates that the Next player to play from the current position can win, whereas a
𝒫 -position indicates that the Previous player, the one who made the move to the
current position, is the one to win. Thus the current player is bound to lose from this
position, no matter what moves she or he makes. A winning strategy for a player in an
𝒩 -position is to move to one of the 𝒫 -positions. More background on combinatorial
games can be found in [1, 2]. An extensive bibliography on impartial games can be
found in [7].
Dufour and Heubach [4] proved general results on the set of 𝒫 -positions of
CN(n, 1), CN(n, n), and CN(n, n − 1) for all n. These general cases cover all games
for n ≤ 3. They also gave the results for all games with n ≤ 6, except for CN(6, 2),
and also solved the game CN(8, 6). In this paper, the main result is on the 𝒫 -positions
for CN(7, 4). One sign of the increase in complexity as n and k increase is that, un-
like in the results for the cases already proved, we can no longer describe the set of
𝒫 -positions as a single set, which makes the proofs more complicated.
To prove our main result, we use the following theorem.
Theorem 1 ([1, Theorem 2.13]). Suppose the positions of a finite impartial game can be
partitioned into mutually exclusive sets A and B with the following properties:
Circular Nim games CN(7, 4) | 141
Here is our main result, with a visualization of the 𝒫 -positions of CN(7, 4) given in
Figure 9.3. In this figure, we highlight the sum conditions by encircling stacks whose
sums have to be equal in dark blue and color any stacks that equal the sums in the
same color. Pairs of stacks that also have the same sum, but for which this is true due
to some symmetry, are encircled in lighter blue.
Note that all the subsets of S are disjoint. The condition a < max{c, d} of S4 prohibits
a pair of adjacent minima, which all other sets have. Also, S2 is disjoint from the other
sets because they all have a strict inequality condition. Finally, S1 ∩ S3 = ⌀ since a > 0
for S3 .
The following definitions and remarks will aid us in the proofs of our results. Note
that we assume a to be the minimum, not necessarily unique. In the proofs, we will
typically denote the minimal and maximal values of a target position p′ by m and M,
respectively.
Definition 2. A tub configuration xaax is a set of four adjacent stacks that consists of
a pair of adjacent minima (of the position) surrounded by two stacks of equal height.
There are three other stacks in the position, which we denote by x1 x2 x3 unless we know
the actual stack heights. A peak configuration is of the form xXx, a set of three adja-
cent stacks with x < X. If x and X are the minimum and the maximum, respectively,
of the position, then we call this configuration a minmax peak. A position with a peak
contains four other stacks, which we denote by x1 x2 x3 x4 unless we know the actual
stack heights. Finally, a position has the common sum requirement if consecutive dis-
joint pairs of adjacent stacks have the same sum, with one exceptional overlap stack
contributing to two sums.
With these definitions, we can make the following remarks regarding the specific
features of each subset of S.
Remark 1.
(1) In S3 , a < e, and the sum conditions imply that c > max{a, d} and c ≥ e.
(2) In S4 , we have the following inequalities: a < min{b, e} implies that g > max{c, d}
due to the common sum requirement. Furthermore, g > a.
(3) Positions in S1 ∪ S3 contain a tub configuration. We have
– p ∈ S1 needs to satisfy the trisum condition x1 + x2 + x3 = x,
– p ∈ S3 needs to satisfy x1 = x3 and x2 + x3 = a + x.
(4) When trying to move from a position p with a = min(p) > 0 to p′ ∈ S1 ∪ S3 , we
can create a tub configuration with minimum a′ < a by selecting a pair of stacks,
making them height a′ and then reducing the larger of the two stacks adjacent
to the pair of a′ -stacks to the height of the smaller one (if needed). This height
gives the value of x in the tub configuration xa′ a′ x. Any remaining play has to
occur on x1 , the stack adjacent to the stack that was decreased to x. In labeling the
three nontub configuration stacks, we read p′ starting from the minima a′ in the
direction of the stack whose height was reduced to x. (If the two stacks adjacent to
the pair a′ a′ were already of equal height, then either one of the stacks adjacent to
the x stacks could play the role of x1 .) Note that we cannot play on the remaining
two stacks x2 and x3 , so we may not be able to meet the remaining conditions of
S1 or S3 , respectively.
(5) Positions in S4 always contain a minmax peak, whereas positions in S3 may con-
tain a peak. In either case the remaining four stacks have to satisfy that x1 + x2 =
x3 + x4 = x + X.
(6) Positions in S4 have either two or three minima. If c = a = m, then p = (m, M, m,
d, e, m, M), that is, two maxima alternate with three minima. Otherwise, the two
minima are separated by the maximum.
(7) The common sum requirement is a condition of S3 ∪ S4 and is trivially satisfied for
positions in S2 . It is relatively easy to see that we cannot make a move from a position p that satisfies the common sum requirement to a position p′ that also satisfies it when the location of the overlap stack remains the same.
In that case, at least one sum remains unchanged while at least one other sum is
decreased. Specifically, there is no move from S2 to S3 ∪ S4 .
Proof. To prove condition (I) of Theorem 1, we will use the equivalent statement that
there is no move from a 𝒫 -position to another 𝒫 -position. For each of the four subsets
of S, we consider moves to all the other sets. Note that the only terminal position is
0 ∈ S2 .
Moves from S1
We start with p = (0, c, d, e, f , c, 0) ∈ S1 with d + e + f = c. Note that we cannot move
to p′ ∈ S1 ∪ S2 because in either case, we would have to play on the five stacks cdefc
to simultaneously reduce the c stacks and the sum to a new value c′ < c in the case of
S1 and c′ = 0 in the case of S2 . A move to S3 is not possible since the minimum in S3 is
greater than zero. A move to p′ ∈ S4 is not possible because S4 does not have adjacent
minima by Remark 1(6). Thus no move is possible from S1 to S.
Moves from S2
Now assume that p = (a, a, a, a, a, a, a) ∈ S2 with a > 0 because p is the terminal
position for a = 0. To move to S1 , we have to create a tub configuration of the form
x00x, which requires play on three stacks (even though we remove tokens from only
two stacks). We can at most reduce one of the three remaining stacks x1 x2 x3 = aaa, so
the sum x1 + x2 + x3 ≥ 2a, whereas x = a, so there is no move from S2 to S1 . Clearly, we
cannot move from S2 to S2 . By Remark 1(7) there is no move from S2 to S3 ∪ S4 .
Moves from S3
Let p = (a, a, c, d, e, d, c) ∈ S3 with a + c = d + e. To move to S1 ∪ S3 , we have to create
a tub configuration of the form xa′ a′ x, with a′ = 0 for p′ ∈ S1 and a′ ≤ a for p′ ∈ S3 .
First, we consider play when the minima a′ of p′ are located at the a stacks. For a move
to S1 , we play on both a stacks making them zero and then either reduce both c stacks
or one of the d stacks, but not both. In either case, we have that x ≤ c and the trisum
d′ + e + d ≥ d + e = a + c > c, so the trisum condition is not satisfied. For a move to S3 ,
the overlap stack remains at the same location, and by Remark 1(7) there is no move
to S3 .
Now we look at the cases where we create a tub configuration xa′ a′ x elsewhere.
In each case, we use play on three stacks as described in Remark 1(4). By symmetry of
positions in S3 we have to consider the three possibilities indicated in Figure 9.4a. They
are x = a with x1 x2 x3 = edc, x = a with x1 x2 x3 = dca, or x = d with x1 x2 x3 = aac (since
c > d by Remark 1(1), we read counterclockwise). By Remark 1(3) we need to satisfy the
conditions x1 + x2 + x3 = x + 0 = x + a′ for p ∈ S1 and both x1 = x3 and x2 + x3 = x + a′ for
p ∈ S3 . We will show that even if we reduce x1 to zero, then we will not be able to satisfy
the respective sum conditions. For x = a, x2 + x3 ≥ min{d + c, c + a} > a + a′ = x + a′ ,
and for x = d, x1 + x3 = a + c = d + e > d + a′ = x + a′ . Thus p′ ∉ S1 ∪ S3 . It is also not
possible to move to p′ ∈ S2 , since min{c, e} > a by Remark 1(1), so we would need to
play on five stacks to reduce cdedc to aaaaa.
To show that we cannot move from S3 to S4 , we consider the possible locations of the
minmax peak of p′ . Due to the symmetry of positions in S3 , we need to consider the
four possible peak configurations shown in Figure 9.4b: a′ aa′ with sums d + e ≤ d + c,
a′ ca′ with sums e + d = c + a, a′ da′ with sums d + c > a + a, or a′ ea′ with sums c + a (in
both cases). Note that in the first three cases, we have a′ < a because the minimum of
the minmax peak in S4 has to be strictly less than the adjacent stacks, and in each of
these cases, the a stack is one of them. We can play on one more stack adjacent to the
a′ stacks, and we play on the stack that affects the larger sum. In the first two cases the
peak sum is smaller than the smaller of the two sums, and since we can adjust only
one sum, we cannot legally move to p′ ∈ S4 . For the third case, equality with the peak
sum requires that d′ + c = d + a′ , and hence d′ = d − c + a′ < a′ because c > d by
Remark 1(1). For the last case, the overlap stack is at the same location in p and p′ ,
so by Remark 1(7) we cannot adjust all four sums with play on only four stacks. This
shows that we cannot move to p′ ∈ S4 .
Moves from S4
Finally, we check whether we can move from p = (a, b, c, d, e, a, g) ∈ S4 to p′ ∈ S. The
approach is similar to that taken when p ∈ S3 . For a move to p′ ∈ S1 ∪ S3 , we once
more need to create a tub configuration xa′ a′ x, where a′ ≤ a, and a′ = 0 for moves
to S1 . Due to the semisymmetric nature of positions in S4 , we now need to consider
all seven placements of the new pair of minima. We start by putting them at stacks
a and b and, proceeding around the circle, obtain the following cases:
– x = c, x1 x2 x3 = aed (since we have to reduce g);
– x = a, x1 x2 x3 = eag;
– x = min{b, e}, x1 x2 x3 = aga (no matter which side we need to play on);
– x = a, x1 x2 x3 = bag (since we need to play on c);
– x = d, x1 x2 x3 = abc;
– x = a, x1 x2 x3 = dcb;
– x = a, x1 x2 x3 = cde.
First, we look at the cases where x = a. Reducing x1 to zero, we have that x2 + x3 =
a + g = c + b = e + d > a + a ≥ a + a′ , so the sum conditions of S1 and S3 are not satisfied.
Likewise, for x = c, we have that x2 + x3 = e + d = c + b > c + a ≥ c + a′ , and for x = d,
we obtain x2 + x3 = b + c = d + e > d + a ≥ d + a′ . Finally, for x = min{b, e}, we have that
x2 + x3 = g + a = min{b, e} + max{d, c} > min{b, e} + a ≥ min{b, e} + a′ , so we cannot
move to p′ ∈ S1 ∪ S3 .
Next, we look at moves from S4 to S2 . Since a < min{b, e, g}, we have to reduce at
least those three stacks to a, which requires play on five stacks. Therefore we cannot
move from S4 to S2 .
Finally, we look at moves from S4 to S4 . If we keep the location of the minima and
hence the overlap stack, then by Remark 1(7) there is no move to p′ ∈ S4 . Thus we
need to consider whether we can create a minmax peak a′ Xa′ with a′ < a and the
remaining stacks x1 x2 x3 x4 , which satisfy x1 + x2 = x3 + x4 = a′ + X by Remark 1(5).
We can play on either x1 or x4 , but in either case, we can only modify one of the two
sums x1 + x2 and x3 + x4 . The common sum for p is s = g + a, whereas for p′ , it is
s′ = X + a′ < s. Furthermore, x2 and x3 cannot be adjusted. Let us look at the possible
cases, going clockwise and starting with new minima at the g and b stacks, for a total
To show that we can make a legal move from any position p ∈ Sc to a position
p′ ∈ S, we partition the set Sc according to the number of zeros of p and, for positions
without a zero stack, according to the locations of the maximal stacks. We will only
need to distinguish between the cases of exactly one zero and of at least two zeros. Note
that in [14], Sc was partitioned according to the exact number of minima of p. The proof
presented here is shorter and uses some of the ideas from [14], such as Definition 3
and Lemma 1. We call out these structures and CN(3, 2)-equivalence (defined below)
because they give insight into stack configurations from which it is easy to move to
𝒫 -positions.
We define the set sums p̃ i = ∑pj ∈Ai pj and call a move a CN(3, 2) winning move if play on
the stacks in the sets Ai results in equal set sums in p′ . A CN(3, 2)-equivalent position
that has equal set sums is called a CN(3, 2)-equivalent 𝒫 -position.
CN(3, 2)-equivalent positions are perfectly suited for moves to S1 since the condi-
tions on the nonzero stacks in S1 require the equality of the trisum and two adjacent
stack heights (set sum of a single stack). However, we will also see that a CN(3, 2) win-
ning move can be used when there are additional inequality conditions on some of
the stacks as long as those conditions can be maintained. In other instances the sum
conditions may involve a stack outside the three sets, but the sum condition can be
achieved without play on that “outside” stack.
The proof of Proposition 2 will proceed as a sequence of lemmas, where we will
consider the individual cases according to the number of zeros and the location(s) of
the maximum values in the case where the position has no zero. We start by dealing
with positions that have at least two zero stacks.
Lemma 2 (Multiple zeros lemma). If p ∈ Sc and p has at least two stacks without to-
kens, then there is a move to p′ ∈ S1 ∪ S2 ∪ S4 .
Proof. Note that we will label the individual stacks as x, xi , y, and yj depending on
the symmetry of the position and the role the different stacks play. Typically, stacks
labeled x or xi are between zeros (short distance) or adjacent to zeros. Since the posi-
tions in S1 ∪ S3 ∪ S4 all have sum conditions that need to be satisfied, we will typically
use s to denote this target sum. We consider the cases of two adjacent zeros, two zeros separated by one stack, and finally two zeros separated by two (or three) stacks, where the distance between stacks is always taken to be the shorter distance around the circle.
Figure 9.5 shows the generic positions in each of the cases. Any position with at least
three zeros falls into either case (a) or case (b).
First, suppose there are two consecutive zeros in the position as shown in Fig-
ure 9.5a. Note that p = (x1 , 0, 0, x2 , y3 , y2 , y1 ) is CN(3, 2)-equivalent with sets A1 = {x1 },
A2 = {x2 }, and A3 = {y1 , y2 , y3 }. Thus we can make the CN(3, 2) winning move to p′ ∈
S1 ∪ S2 by adjusting the stacks in two of the Ai to make the set sums in p′ equal to the
minimal set sum in p. This can be achieved with play on four stacks or fewer. Note that
we move to S2 when either x1 or x2 (or both) equal zero, that is, we have at least three
consecutive zeros, and p′ is the terminal position in that case.
Figure 9.5: Generic positions with at least two zeros. (a) Two consecutive zeros. (b) Two zeros sepa-
rated by one stack. (c) Two zeros separated by two stacks.
Now we can assume that any zeros in p are isolated, that is, they are either separated
by one stack or by two stacks. Let us first consider the case of two zeros separated by
a single stack, that is, p = (0, x, 0, y1 , y2 , y3 , y4 ) with min{x, y1 , y4 } > 0 because of the
isolated zero condition (see Figure 9.5b). Our goal is to move to S4 . Due to the zeros
(which will also be the minima in p′ ), the sum conditions of S4 reduce to x′ = y1′ +
y2′ = y3′ + y4′ with min{y1′ , y4′ } > 0. Since p is CN(3, 2)-equivalent with sets A1 = {x},
A2 = {y1 , y2 }, and A3 = {y3 , y4 }, we can make the CN(3, 2) winning move to p′ . Note
that we can achieve the condition min{y1′ , y4′ } > 0 because the original stacks were
nonzero, and any set of two stacks that is being played on can be adjusted to achieve
the desired sum without making y1 or y4 equal to zero because x > 0 by the assumption
of the isolated zeros. However, if in the process we need to make y2′ = y3′ = 0, then the
resulting position is in S1 .
Next we turn to the case where the zeros are separated by two stacks, that is,
p = (0, x1 , x2 , 0, y2 , y, y1 ) with min{x1 , x2 , y1 , y2 } > 0 since we assume isolated zeros (see
Figure 9.5c). We also assume without loss of generality that y2 ≥ y1 . Now we need to
consider two subcases y1 ≥ x1 and y1 < x1 . Note that for each of the subcases, the sum
s will be defined on a case-by-case basis.
In the first case, we let s = min{x1 + x2 , y1 } and move to p′ = (0, x1 , x2′ , 0, s, 0, s) ∈ S4
with x1 + x2′ = s. While this looks as if there is play on five stacks, actually either x2 or y1
will remain the same. If s = y1 , then play is on the x2 , 0, y2 , and y stacks, and because
x1 ≤ y1 , we have x2′ = s − x1 = y1 − x1 ≥ 0. If s = x1 + x2 , then play is on the three y stacks.
Now we look at y1 < x1 , which is a little bit more involved. Here our goal is to move
to S1 , so we need to create a pair of zeros. Since y2 ≥ y1 , we choose x2′ = 0 and show
that we can make x1′ , y2′ , and the trisum 0 + y1′ + y′ equal in p′ . Let s = min{x1 , y1 + y, y2 }.
Unless s = y2 with y > y2 , we can move to p′ = (s, 0, 0, s, y′ , y1′ , 0) with y′ + y1′ = s by
playing on at most four stacks as follows: If s = x1 , then play is on stacks x2 , y2 , and y
with y′ = s − y1 = x1 − y1 > 0. If s = y1 + y, then play is on stacks x1 , x2 , and y2 . Finally,
if s = y2 ≥ y, then play is on stacks x2 , x1 , and y1 with y1′ = y2 − y ≥ 0.
This leaves the case of y1 < x1 , y1 ≤ y2 , s = y2 < min{x1 , y1 + y} with y > y2 unresolved.
This set of inequalities can be simplified to y1 ≤ y2 , y1 < x1 , and y2 < min{x1 , y}. Note
specifically that y > yi for i = 1, 2. We need to make further distinctions as to where the
maximal value occurs. In all cases, we will move to S1 , but the location of the maximal value
determines where the pair of adjacent zeros is created. Let M = max(p) = max{x1 , x2 , y}
(all other stacks cannot be maximal due to the inequalities).
First, we consider the case where the maximal value occurs next to a zero, that is,
M = x1 or M = x2 . Let s = min{x1 + y1 , x2 + y2 , y} and assume that M = x1 . We claim that
there is a legal move to p′ ∈ S1 where p′ = (0, s, x2′ , 0, y2 , s, 0) with x2′ + y2 = s. Note that
M = x1 implies that s < x1 + y1 because s = x1 + y1 leads to a contradiction. Specifically,
because yi > 0 due to isolated zeros, we would have x1 < x1 + y1 = s ≤ y ≤ M = x1 . If
s = x2 + y2 , then play is on stacks x1 , 0, y1 , and y, and it is a legal move since x1 = M ≥
y ≥ s. If s = y, then play is on stacks y1 , 0, x1 , and x2 with x2′ = s − y2 = y − y2 > 0. Since
y > yi , the same proof, except with subscripts 1 and 2 changing places, applies when
M = x2 .
The final case is where M = y > max{x1 , x2 }. We first consider x1 > x2 and let
s = min{x1 , x2 + y2 }. Then the move is to p′ = (0, s, x2 , 0, y2′ , s, 0) ∈ S1 with x2 + y2′ = s. If
s = x1 , then play is on y1 , y, and y2 . The move is legal since y > x1 and y2′ = x1 − x2 > 0.
On the other hand, if s = x2 + y2 , then play is on stacks y, y1 , 0, and x1 , and y > x1 > s.
When x1 ≤ x2 , then the move is to p′ = (0, x1 , s, 0, 0, s, y1′ ) ∈ S1 with y1′ + x1 = s and
s = min{x2 , x1 + y1 }. The proof follows as in the case x1 > x2 . This completes the case of
two zeros that are two stacks apart and therefore the case of more than two zeros.
Lemma 3 (Unique zero lemma). If a position p ∈ Sc has a unique zero, then there is a
move to p′ ∈ S.
Proof. The generic position for this case is shown in Figure 9.6. Note that due to the
assumption of the unique zero, we have that all other stack heights are nonzero, so
xi > 0 and yi > 0 for i = 1, 2, 3. We may also assume without loss of generality that
x2 ≥ y2 . We will see that in almost all cases, we can move to S1 ; there is a single sub-
case where we will move to S4 . Table 9.1 gives a quick overview of the structure of the
subcases.
Table 9.1: Subcases for a position with a unique zero (assuming x2 ≥ y2 ).
(a) x1 + y1 ≤ min{x2 , y2 } = y2 : move to p′ ∈ S1 .
(b) x1 + y1 > y2 and y2 ≥ y1 : move to p′ ∈ S1 .
(c) x1 + y1 > y2 , y2 < y1 , and x2 ≥ y1 : move to p′ ∈ S1 .
(d) x1 + y1 > y2 , y2 < y1 , and x2 < y1 : move to p′ ∈ S1 ∪ S4 .
Finally, we deal with the case where the position p has no zero. In this case, we
divide the positions according to where the maximum is located in relation to other
maxima (if any). Note that when min(p) > 0, there is a close relation between positions
in S3 and S4 . A position p = (m, M, m, p4 , p5 , p6 , p7 ) with p4 + p5 = p6 + p7 = M + m and
min{p4 , p7 } > m is in S4 if max{p5 , p6 } > m and is in S3 if p5 = p6 = m. Therefore there is
a move to S3 ∪ S4 , and we need only check on the sum and minimum conditions. This
property will be used repeatedly in the maximum lemma.
Lemma 4 (Maximum lemma). Let p ∈ Sc with min(p) > 0. Then there is a move from p
to p′ ∈ S.
Proof. Let M = max(p). We will first look at the antipodal case, where we have
two maxima “opposite” (at distance two) of each other. The generic position is
p = (x1 , x2 , M, y3 , y2 , y1 , M) shown in Figure 9.7b.
Figure 9.7: Generic positions for antipodal maxima. (a) y3 = M and (b) y3 < M.
Table 9.2 shows the subcases we will consider for antipodal maxima. Without loss of
generality, we may assume that y3 ≤ y1 .
Table 9.2: Subcases for antipodal maxima (assuming y3 ≤ y1 ).
(a) y3 = M: move to p′ ∈ S1 ∪ S3 ∪ S4 .
(b1) y3 < M and y2 + y3 ≤ M: move to p′ ∈ S1 .
(b2) y3 < M, y2 + y3 > M, and x1 ≥ x2 : move to p′ ∈ S3 .
(b3) y3 < M, y2 + y3 > M, and x1 < x2 : move to p′ ∈ S3 ∪ S4 .
(a) We start with the case M = y3 shown in Figure 9.7a. Note that since y3 ≤ y1 , we also
have y1 = M. In this case the generic position becomes p = (x1 , x2 , M, M, y, M, M).
We may also assume in this case that without loss of generality, x1 ≤ x2 . If x1 + x2 <
M, then Mx1 x2 MM forms a shallow valley, and there is a move to S1 . Now assume
that M ≤ x1 + x2 ≤ M + y. In this case, there is a move to p′ = (x1 , x2 , x1 , M, x1 + x2 −
We now assume that M > y3 (see Figure 9.7b) and consider the subcases listed in
Table 9.2.
(b1) Since M ≥ y2 + y3 , position p is either a shallow valley (if y1 + y2 + y3 > M) or a deep valley (if y1 + y2 + y3 ≤ M), so there is a move to p′ ∈ S1 .
This completes the case of antipodal maxima. We now consider the case where
M > max{x3 , y3 }, so the stacks that are opposite of M have strictly smaller height.
Our generic position is shown in Figure 9.8. Without loss of generality, we may as-
sume that x1 ≤ y1 . Once more, we move to either p′ ∈ S1 or p′ ∈ S3 ∪ S4 . Now let
s = min{M + x1 , x2 + x3 , y2 + y3 }.
– If s = M + x1 , then we can move to p′ = (M, x1 , y2 , y3′ , x3′ , x2 , x1 ) ∈ S3 ∪ S4 with
x2 + x3′ = y2 + y3′ = M + x1 . Play is on the yi stacks and x3 ; the move is legal because
x1 ≤ y1 by assumption, x3′ = M + x1 − x2 ≤ x3 , and x3′ > 0 since M = max(p)
and all stack heights are positive. Likewise, 0 < y3′ ≤ y3 . It remains to show that
min{x2 , y2 } > x1 . By assumption, M > max{x3 , y3 }, which implies both 0 < M − x3 ≤
x2 − x1 and 0 < M − y3 ≤ y2 − x1 , so the move is legal.
These three lemmas together prove Proposition 2, because each position either
has multiple zeros, a unique zero, or no zero. Together with Proposition 1 and Theo-
rem 1, we have shown that the set S of Theorem 2 is the set of 𝒫 -positions of CN(7, 4).
3 Discussion
Our goal in the investigations of CN(n, k) has always been to find a general structure of
the 𝒫 -positions for families of games. So far we have found such results for CN(n, 1),
CN(n, n), and CN(n, n − 1) (see [4]). In addition, in all previous results for CN(n, k), we
have been able to find a single description of the 𝒫 -positions. The case of CN(7, 4) is
seemingly an anomaly in that four different sets make up the 𝒫 -positions. However,
a careful look at the 𝒫 -positions of CN(3, 2), CN(5, 3), and CN(7, 4), which are all ex-
amples of CN(2ℓ + 1, ℓ + 1), reveals a common structure. Recall that the 𝒫 -positions
of CN(3, 2) are given by {a, a, a} for a ≥ 0 and the 𝒫 -positions of CN(5, 3) are given by
{(x, 0, x, a, b) | x = a + b}. This leads to the following result.
Lemma 5. In the game CN(2ℓ + 1, ℓ + 1) the set of 𝒫 -positions contains the set S1 , where
S1 = {p = (x, 0, . . . , 0, x, a1 , . . . , aℓ ) | a1 + ⋯ + aℓ = x},
where the run of zeros between the two x stacks has length ℓ − 1.
Proof. Observe that all positions in CN(2ℓ + 1, ℓ + 1) with ℓ − 1 consecutive zeros are
CN(3, 2)-equivalent with sets {p1 }, {pℓ+1 }, and {pℓ+2 , . . . , p2ℓ+1 }, and that S1 consists pre-
cisely of the CN(3, 2)-equivalent 𝒫 -positions. Therefore we cannot make a move from
S1 to S1 because this would amount to a move from a 𝒫 -position in CN(3, 2) to another
𝒫 -position in CN(3, 2). On the other hand, we can make a CN(3, 2) winning move into
S1 from any position in CN(2ℓ + 1, ℓ + 1) that has ℓ − 1 consecutive zeros. Therefore S1
must be a subset of the 𝒫 -positions of CN(2ℓ + 1, ℓ + 1).
Although Lemma 5 does not settle the question regarding the set of 𝒫 -positions
of the family of games CN(2ℓ + 1, ℓ + 1), the result shows that the set S1 for CN(7, 4),
which has the requirement of the zero minima, is not an anomaly, but a fixture among
the 𝒫 -positions of this family of games. Note that for CN(3, 2) and CN(5, 3), the set of
𝒫 -positions equals S1 . These two games are too small to show the more general struc-
ture of the 𝒫 -positions of this family. The question arises whether there are gener-
alizations of S2 , S3 , or S4 that constitute a part of the 𝒫 -positions of other games in
this family. The obvious candidate would be S2 with all equal stack heights. Interest-
ingly enough, this set is NOT a part of the 𝒫 -positions (except for the terminal po-
sition) of CN(9, 5). For example, the position (2, 2, 2, 2, 2, 2, 2, 2, 2) is an 𝒩 -position of
CN(9, 5).
Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play, A. K. Peters Ltd., Wellesley, MA,
2007.
[2] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
second edition, A. K. Peters Ltd., Wellesley, MA, 2014.
[3] C. L. Bouton, Nim, a game with a complete mathematical theory, Ann. of Math. (2) 3, (1901/02),
35–39.
[4] M. Dufour and S. Heubach, Circular Nim games, Electron. J. Combin. 20(2), (2013), P22.
[5] R. Ehrenborg and E. Steingrímsson, Playing Nim on a simplicial complex, Electron. J. Combin.
3(1), (1996), R9, 33 pages.
[6] T. S. Ferguson, Some chip transfer games, Theoret. Comput. Sci. 191 (1998), 157–171.
[7] A. S. Fraenkel, Combinatorial games: selected bibliography with succinct gourmet introduction,
Electron. J. Combin. DS2 (2012), 109 pages.
[8] D. Gale, A curious Nim-type game, Amer. Math. Monthly 81 (1974), 876–879.
[9] R. K. Guy, Impartial games, in Games of No Chance, MSRI Publications, 29 (1996), 61–78.
[10] D. Horrocks, Winning positions in simplicial Nim, Electron. J. Combin. 17 (1) (2010), 13 pages.
[11] E. H. Moore, A generalization of a game called Nim, Ann. of Math. 11 (1910), 93–94.
[12] A. J. Schwenk, Take-away games, Fibonacci Quart. 8, (1970), 225–234.
[13] R. Sprague, Über zwei Abarten von Nim, Tohoku Math. J. 43 (1937), 351–354.
[14] A. Vo and S. Heubach, Circular Nim games CN(7,4). Cal State LA, (2018), 75 pages (thesis).
[15] M. J. Whinihan, Fibonacci Nim, Fibonacci Quart. 1(4) (1963), 9–13.
[16] W. A. Wythoff, A modification of the game of Nim, Nieuw Arch. Wiskd. 7 (1907), 199–202.
Aaron Dwyer, Rebecca Milley, and Michael Willette
Misère domineering on 2 × n boards
Abstract: Domineering is a well-studied tiling game, in which one player places verti-
cal dominoes, and a second places horizontal dominoes, alternating turns until some-
one cannot place on their turn. Previous research has found game outcomes and val-
ues for certain rectangular boards under normal play (last move wins); however, noth-
ing has been published about domineering under misère play (last move loses). We
find optimal-play outcomes for all 2 × n boards under misère play: these games are
Right-win for n ⩾ 12. We also present algebraic results including sums, inverses, and
comparisons in misère domineering.
1 Introduction
The game of domineering has two players alternately placing dominoes to tile a check-
erboard or any other grid. The player called Left can only place dominoes in a vertical
orientation, and the player called Right can only place horizontally. Domineering is
a combinatorial game because there is perfect information and no chance, and it is
partizan (as opposed to impartial) because the two players have different move op-
tions. In normal-play combinatorial games, the first player unable to move on her/his
turn loses; under misère play, the first player unable to move is the winner. This paper
considers domineering under misère play.
A game G is defined by the sets of Left options and Right options that the corre-
sponding player can reach with a single move. We use Gm×n to denote a game of dom-
ineering on an empty m × n board. So, for example, G2×2 has one Left option to G2×1
and one Right option to G1×2 .
Acknowledgement: A. Dwyer was supported in part by a Natural Sciences and Engineering Research
Council of Canada Undergraduate Student Research Award. R. Milley was supported in part by a Natural
Sciences and Engineering Research Council of Canada Discovery Grant. M. Willette was supported
in part by a Natural Sciences and Engineering Research Council of Canada Undergraduate Student
Research Award.
Aaron Dwyer, School of Mathematics and Statistics, Carleton University, Ottawa, Canada, e-mail:
aarondwyer@cmail.carleton.ca
Rebecca Milley, Computational Mathematics, Grenfell Campus, Memorial University, St. John’s,
Canada, e-mail: rmilley@grenfell.mun.ca
Michael Willette, Department of Applied Mathematics, University of Waterloo, Waterloo, Canada,
e-mail: mwillette@uwaterloo.ca
https://doi.org/10.1515/9783110755411-010
158 | A. Dwyer et al.
Given any game position G, the outcome o(G) is the winner under optimal play.
There are four possibilities:
o(G) = ℒ if Left wins G whether she goes first or second,
       ℛ if Right wins G whether he goes first or second,
       𝒩 if the next player to move in G wins,
       𝒫 if the previous player (i. e., not the next player) wins.
By o− (G) we denote the outcome of G under misère play and by o+ (G) the outcome
under normal play. For example, o− ( ) = 𝒩 and o+ ( ) = ℛ. The zero game, in
which there are no moves for either player (e. g., a 1 × 1 board in domineering), has
o− (0) = 𝒩 and o+ (0) = 𝒫 . The negative of a game G, denoted −G, is the game G with
the roles of Left and Right swapped; in domineering, this is equivalent to rotating G
by 90 degrees. The disjunctive sum of two games G and H is the game G + H in which,
on their turn, a player can choose to play in G or in H. In domineering, as players
place pieces, a single connected board often breaks into a disjunctive sum of disjoint
boards: for example, if Left plays in the third column of G2×6 , then the new position is
G2×2 + G2×3 .
Two games G and H are equal if they can be interchanged in any sum without
affecting the outcome, that is, if o(G +X) = o(H +X) for any sum of games X. Inequality
is defined by G ⩾ H if o(G + X) ⩾ o(H + X) for all X, where outcomes are ordered
according to preference by Left: ℒ > 𝒩 > ℛ and ℒ > 𝒫 > ℛ with incomparable 𝒩
and 𝒫 . Equality and inequality are dependent on the ending condition; games can
be equal or comparable in normal play but not in misère play, etc. In normal play,
G + (−G) = 0 for all games G.
Normal-play domineering has been the subject of numerous papers by math-
ematicians and computer scientists. Elwyn Berlekamp found the normal-play out-
comes and values for positions in 2 × n and 3 × n domineering in his 1988 paper [2].
Since that time, computer programs have been developed to find the normal-play
outcome of rectangular boards: up to 9 × 9 were solved by the computer program
developed in [3]; this was extended to 10 × 10 by [4] and finally to 11 × 11 by [10]. In
[5], theoretical and computational techniques were used to determine the outcomes
of all 2 × n boards under normal play: for n ⩾ 28, the boards are all Right-win.
What about misère play? The primary purpose of this paper is finding outcomes
of all 2 × n games of domineering under misère play. In general, misère play is much
less studied; although the standard definitions of addition, negation, equality, and
inequality can be applied, there are many problems with the algebra. For example, if
G ≠ 0, then G and −G never sum to zero in general misère play [7], and even in re-
stricted play (see Section 3), most games are not invertible. Another problem is that
knowing the misère outcome of two games gives no information about the misère out-
come of their sum [7]; in Section 3.1, we show that this property is true even when
Misère domineering on 2 × n boards | 159
restricted to domineering positions. For these and other reasons, it is much more diffi-
cult to analyze misère games using the usual game-theoretic techniques. Indeed, our
solution for 2 × n boards is purely combinatorial.
The remainder of the paper is structured as follows. In Section 2, we present the
solution for 2 × n domineering. In Section 3, we consider a number of algebraic prop-
erties of misère domineering, including outcomes of sums (Section 3.1), invertibility
(Section 3.2), and comparisons (Section 3.3) of certain 2 × n positions. Section 4 gives
a summary and further discussion.
Figure: the positions 2G2×2 and G2×4 .
Lemma 1. The misère outcome of (2k)G2×2 is next-win, and the misère outcome of (2k +
1)G2×2 is previous-win.
Proof. We show winning strategies for Right, and the strategies for Left follow by sym-
metry. Right playing first on an even sum of 2 × 2 boards should use his first k moves to
“claim” half of the boards, placing one piece in each of k different boards. Left cannot
prevent this. Right should use the next k moves to play a second piece in each of those
boards (i. e., Right plays in all the positions he just created). In total, Right places
2k pieces. During this time, there are exactly 2k moves available for Left among the
other k boards. Left as the second player will get the last move, and so Right wins.
The same strategy works for Right playing second on an odd sum of 2 × 2 boards:
this time, after Left and Right have each made 2k moves, there is an extra 2 × 2 board
remaining (or possibly two 2×1 boards), and it is Left’s turn next. Left is forced to move
to , and Right wins.
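The parity argument of Lemma 1 can be checked by brute force. A sum of 2 × 2 boards has a compact state: once a player places a piece in a fresh 2 × 2 board, the two remaining cells of that board can only ever be used by that same player. The sketch below is plain Python written for this note (not from the paper); it encodes a position as counts of fresh boards and half-filled boards reserved for each player, under the misère convention that a player with no move wins.

```python
from functools import lru_cache

# State of a sum of 2x2 domineering boards:
#   fresh : untouched 2x2 boards (either player may start one)
#   lh    : boards where Left placed one piece (only Left can finish them)
#   rh    : boards where Right placed one piece (only Right can finish them)

@lru_cache(maxsize=None)
def mover_wins(fresh, lh, rh, left_to_move):
    moves = []
    if left_to_move:
        if fresh:
            moves.append((fresh - 1, lh + 1, rh))  # start a fresh board
        if lh:
            moves.append((fresh, lh - 1, rh))      # finish a reserved board
    else:
        if fresh:
            moves.append((fresh - 1, lh, rh + 1))
        if rh:
            moves.append((fresh, lh, rh - 1))
    if not moves:
        return True  # unable to move: the mover wins in misere play
    # the mover wins iff some move leaves the opponent in a losing position
    return any(not mover_wins(f, l, r, not left_to_move) for f, l, r in moves)

def outcome_of_sum(m):
    """Misere outcome of m disjoint 2x2 boards: 'N' (next) or 'P' (previous)."""
    left_first = mover_wins(m, 0, 0, True)
    right_first = mover_wins(m, 0, 0, False)
    assert left_first == right_first  # the position is symmetric in the players
    return 'N' if left_first else 'P'

print([outcome_of_sum(m) for m in range(6)])
```

The output alternates between 'N' for even m and 'P' for odd m, matching Lemma 1.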
n            0  1  2  3  4  5  6  7  8  9  10  11
o− (G2×n )   𝒩  ℛ  𝒫  ℒ  𝒩  ℛ  𝒫  𝒩  𝒩  ℛ  ℛ  𝒩
These initial cases do not indicate a pattern in the outcomes; fortunately, the next 12
(solved computationally) do:
n 12 13 14 15 16 17 18 19 20 21 22 23
o− (G2×n ) ℛ ℛ ℛ ℛ ℛ ℛ ℛ ℛ ℛ ℛ ℛ ℛ
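The entries of these tables can be reproduced by exhaustive search. The sketch below is plain Python written for this note (it is not the authors' program): a 2 × n board is stored as its set of empty cells, Left places vertical dominoes and Right horizontal ones, and a position is classified as 𝒩, 𝒫, ℒ, or ℛ under the misère convention that a player unable to move wins.

```python
from functools import lru_cache

def misere_outcome(n):
    """Misere outcome ('N', 'P', 'L', or 'R') of the 2 x n domineering board."""
    start = frozenset((r, c) for r in range(2) for c in range(n))

    def moves(cells, left):
        if left:   # Left: vertical domino filling both cells of a column
            return [cells - {(0, c), (1, c)} for c in range(n)
                    if (0, c) in cells and (1, c) in cells]
        # Right: horizontal domino in one row across adjacent columns
        return [cells - {(r, c), (r, c + 1)} for r in range(2)
                for c in range(n - 1)
                if (r, c) in cells and (r, c + 1) in cells]

    @lru_cache(maxsize=None)
    def mover_wins(cells, left):
        opts = moves(cells, left)
        if not opts:
            return True  # unable to move: the mover wins in misere play
        return any(not mover_wins(o, not left) for o in opts)

    lf, rf = mover_wins(start, True), mover_wins(start, False)
    return {(True, True): 'N', (False, False): 'P',
            (True, False): 'L', (False, True): 'R'}[(lf, rf)]

print(''.join(misere_outcome(n) for n in range(8)))
```

The printed string matches the first entries of the table above; larger boards need a more efficient search, as in the authors' program.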
As in normal play, Right appears to have the advantage in a 2×n domineering board for
large enough n. Indeed, we will now show that for n ⩾ 12, all 2×n boards are Right-win.
(Interestingly, in normal play, other outcomes are possible until n ⩾ 28.) The strategy
for Right depends on the congruence class of n modulo 4, and so we prove the result across
four separate theorems (Theorems 1–4).
To begin, we define some standard moves for Right in 2 × n domineering (see
Figure 10.1). Two Right pieces are adjacent if they are in the same row and occupy
consecutive columns, stacked if they are in the same two columns of different rows,
and staggered if they are in different rows and share exactly one column. We say “Right
makes a stacked move” to mean that Right places a piece that creates a pair of stacked
pieces. In some of the strategies described further, Right places two adjacent pieces to
guarantee that he can make a staggered move later in the game. We must show that
Left cannot prevent Right from placing one or two pairs of adjacent pieces, as needed,
as long as n is sufficiently large; this is done in Lemma 2.
Lemma 2. In misère domineering on a 2 × n board:
(i) Right moving first can place his first two pieces adjacent if n ⩾ 6.
(ii) Right moving second can place his first two pieces adjacent if n ⩾ 12.
(iii) Right moving first can place his first four pieces as two disconnected pairs of adja-
cent pieces if n ⩾ 19.
(iv) Right moving second can place his first four pieces as two disconnected pairs of
adjacent pieces if n ⩾ 24.
Proof.
(i) If n ⩾ 6, then Right moving first can play in the middle of an empty 2 × 6 section
of the board. Left can only reply on one side or the other, and then Right’s second
piece can be placed on the opposite side, adjacent to his first piece. See Figure 10.2.
Figure 10.2: Right playing first in a 2 × 6 can place two pieces adjacent.
(ii) If n ⩾ 12 and Left plays first, then there is an empty 2 × 6 section on one side or the
other of Left’s first piece. By (i) Right can place two pieces adjacent in that section.
(iii) If n ⩾ 19, then Right’s first move should be in the middle of the first 2 × 6 section
of the board (i. e., across columns 3 and 4).
If Left’s first move is within that 2 × 6 section, Right should immediately place
the adjacent piece as in (i). There is still an empty 2 × 12 section of the board starting
after column 7, and so by (ii) Right playing second from here can place another two
adjacent pieces after column 7. Note that Right may have to avoid column 7 to ensure
that the pairs of adjacent pieces are not connected.
If Left’s first move is not in the original 2 × 6 section, but rather somewhere in
columns 7 to 19, then as in (ii), Right can play in an empty 2 × 6 section within the
last 12 columns. Right can place his third and fourth pieces adjacent to his first
and second pieces (or in the other order, if threatened by Left).
(iv) If n ⩾ 24, then Right moving second can place a piece in the middle of the first
2 × 6 section of the board, assuming (without loss of generality) that Left placed
her first piece in the second half of the board.
If Left replies within that section, then Right will place his adjacent piece as in (i).
Left then makes a third move, with at most two of her pieces in the latter 24 − 7 = 17 columns of
162 | A. Dwyer et al.
the board (Right will avoid column 7 to ensure that the pairs of adjacent pieces are
not connected). At most two Left moves in a 2 × 17 section will necessarily leave
an empty 2 × 6 section, with Right to move next, so Right can place another two
adjacent pieces by (i).
If Left does not reply in the first 2 × 6 section, then after her second move, Left has
placed two pieces in the rightmost 2 × 18 section of the board; this still guaran-
tees an empty 2 × 6 section in the rightmost 2 × 17 section (avoiding column 7),
in which Right can place his second piece. As in (iii), Right can place his third
and fourth pieces adjacent to his first and second pieces (or second and first, if
necessary).
As noted above, Right’s strategy for G2×n will depend on the congruence class of n
modulo 4. In several cases the strategy will lead to a position of the form shown in Fig-
ure 10.3: a 2 × n position whose empty squares consist of an equal number of 2 × 1
and 1 × 4 sections, where the 1 × 4 sections are not adjacent to each other (but may be
connected by one or more of the 2 × 1 sections). Lemma 3 shows that Right can always
win these particular end-game positions.
Figure 10.3: Positions in 2 × n domineering whose empty squares consist of nonadjacent 1 × 4 pieces
and the same number of (possibly connected or connecting) 2 × 1 pieces.
Lemma 3. Right, moving first, wins any misère domineering position whose empty
squares consist of nonadjacent 1 × 4 sections together with an equal number of 2 × 1
sections, as in Figure 10.3.
Proof. Right should play in the middle of each 1 × 4 section; meanwhile, Left has no
choice but to take the 2 × 1 sections one at a time. Right may temporarily create a piece
like or or , but Left will play in the 2 × 1 section(s) of those before Right
runs out of 1×4 middle moves, because there are the same number of (2×1)s as (1×4)s.
When Left takes the last 2 × 1, there are no moves remaining, and Right wins.
Theorem 1. o− (G2×n ) = ℛ for n = 4k ⩾ 12.
Proof. Assume that n = 4k ⩾ 12. If Right plays first on a 2 × n board, he should pretend
that the board is cut into 2 × 2 pieces and play the winning next-player strategy (as per
Lemma 1). Right can do this by first placing k pieces anywhere along the bottom row,
effectively claiming k 2 × 2 boards, and then playing directly above those k pieces. Left
cannot prevent Right from making these stacked moves. With each of the first k Right-
Left moves, three bottom-row spaces are taken, so that after Left’s kth move, exactly
k of the 4k columns remain empty. Left will be forced to take all k of these spaces as
Right plays his k stacked moves in the top row, and since Right went first, Right will
run out of moves first.
Right playing second is not as straightforward; Right should not play as if the
board were cut into (2k)G2×2 because that is a next-win position. Right must change the
parity using a staggered move. To set himself up for a staggered move at the end of the
game, Right will place two pieces adjacent, which we know he can do by Lemma 2(ii).
Here is Right’s strategy: place two adjacent pieces in the bottom row and then place
k − 2 more bottom pieces, for a total of k bottom pieces, as before. Since Left went first,
after Right’s kth move, there are 4k − 3k = k empty columns. Now Left has to begin
taking those empty columns. Right plays k − 2 stacked moves above all but his first two
pieces, and after that, there are two empty columns remaining, as well as an empty
1 × 4 section above Right’s first two pieces. It is Left’s turn: she takes one column, leav-
ing exactly one 1 × 4 and one 2 × 1, possibly connected. By Lemma 3 Right wins from
here with a staggered move.
We see for n ≡ 0 (mod 4) that Right playing first is “easy” and involves only
stacked moves for Right, whereas Right playing second requires Right to break par-
ity using a staggered move. We will see the same situation (but vice versa) for n ≡ 2
(mod 4). The hardest case is n ≡ 3 (mod 4), where staggered moves are required for
Right going first and second. It turns out that n ≡ 1 (mod 4) is the simplest case: Right
only ever needs to place stacked pieces, going first or second.
Theorem 2. o− (G2×n ) = ℛ for all n = 4k + 1.
Proof. The case n = 1 is clear. For larger n = 4k + 1, Right playing first or second should
follow the “cut up” strategy from the 4k case, that is, Right should place his first k
pieces in the bottom row and then place k stacked pieces. After each player has made
2k moves, Right has occupied 2k columns, and Left has occupied 2k columns, leaving
exactly one column empty. If it is Right’s turn next, then he has no move and wins; if
it is Left’s turn next, then she takes the last empty column, and then Right wins.
Theorem 3. o− (G2×n ) = ℛ for n = 4k + 2 ⩾ 22.
Proof. For n = 4k + 2 ⩾ 22, Right playing second should place k pieces in the bottom
row followed by k stacked moves in the top row. After Right’s (2k)th move, each player
has taken 2k columns, so that only two columns remain, with Left to move. The two
columns may or may not be adjacent (forming a 2 × 2 square); either way, Left moving
next loses.
Recall that the first player in an odd sum of 2 × 2 boards not only loses, but loses
with another move to spare; e. g., if Right playing first here places only stacked pieces,
then Left will run out of moves, and there will still be another 1 × 2 position remaining.
So to prevent Left from winning, it will not suffice to make a single staggered move as
in the 4k case; Right will have to arrange to make two staggered moves to force Left
into the last move. Right should use his first four moves to place two pairs of adjacent
pieces in the bottom row, not all adjacent, as per Lemma 2(iii). Right should then place
another k − 3 pieces in the bottom row for a total of k + 1 moves (across 2k + 2 columns)
and then place k − 3 stacked pieces above the latter bottom moves. In this time, Left
has taken 2k − 2 columns, so that two empty columns remain, along with two empty
1 × 4 sections above Right’s first four moves. By Lemma 3 Right wins playing first from
here with two staggered moves.
Theorem 4. o− (G2×n ) = ℛ for n = 4k + 3 ⩾ 27.
Proof. Assume that n = 4k+3 ⩾ 27. Right playing first will aim to set himself up to make
a staggered move at the end of the game. Since n > 6, Lemma 2(i) tells us that Right can
place his first two pieces adjacent in the bottom row. Right should then place another
k − 1 pieces in the bottom row for a total of k + 1 moves; after these k + 1 Right–Left
moves, 3(k + 1) spaces have been taken in the bottom row, leaving (4k + 3) − (3k + 3) = k
empty columns. Next, Right makes k − 1 stacked moves above his latter k − 1 bottom
moves, whereas Left places in k − 1 columns, leaving exactly one empty column along
with one empty 1 × 4 section above Right’s first two pieces. By Lemma 3 Right wins
playing first from here with a staggered move.
Right playing second will set himself up to make two staggered moves at the end
of the game. By Lemma 2(iv) Right playing second with n ⩾ 27 can use his first four
moves to place two pairs of adjacent pieces, not all adjacent, in the bottom row. Right
should then make another k − 3 moves in the bottom row for a total of k + 1; after these
k + 1 Left–Right moves, there are (4k + 3) − 3(k + 1) = k empty columns. Now Right
makes k − 3 stacked moves, whereas Left takes k − 3 columns, leaving three empty
columns, along with two empty 1 × 4 sections above Right’s first four moves, with Left
to move. Left has to take one of the empty columns, which leaves two empty columns
and two empty 1×4s. By Lemma 3 Right wins from here with two consecutive staggered
moves.
With Theorems 1–4 and the base cases (n = 14, 15, 18, 19, 23) obtained computa-
tionally, we have the following main result.
Main Theorem. o− (G2×n ) = ℛ for all n ⩾ 12.
3.1 Outcomes of sums
Recall that knowing the misère outcomes of two games gives no information about
the misère outcome of their sum. We have found that this property of misère games
holds even if restricted to domineering; in fact, our examples (given in Table 10.1) are
restricted to domineering positions that fit within 2 × n and n × 2 boards.
3.2 Invertibility
For this and the next subsection, we need to define equivalence and inequality in re-
stricted game play. Two games are equivalent modulo 𝒰 for a set (universe) of games 𝒰
if they can be interchanged in any sum of games from 𝒰 without affecting the outcome:
G ≡𝒰 H if o(G + X) = o(H + X) for all X ∈ 𝒰 .
Note that this equivalence relation is weaker than the usual equality of games, for
which 𝒰 is taken to be the set of all game positions.
Table 10.1: Outcomes of sums of domineering positions, demonstrating the lawless
addition of misère play.
ℒ 𝒩 𝒫 ℛ
𝒫 +𝒫 + + ( + )+ +
𝒫 +𝒩 + +( + ) +0 +
𝒫 +ℒ + + ( + )+ +
𝒫 +ℛ + + ( + )+ +
𝒩 +𝒩 ( + )+ 0+0 ( + )+ ( + )+
𝒩 +ℒ 0+ + + +
𝒩 +ℛ 0+ + + +
ℒ+ℒ + ( + )+ ( + )+ ( + )+
ℒ+ℛ + + ( + )+ +( + )
ℛ+ℛ + ( + )+ ( + )+ ( + )+
In general misère play (i. e., when 𝒰 is the set of all games), G+(−G) is not equal to zero
for any nonzero G [7], but games may be invertible modulo restricted universes. Let ℰ
be the universe of dead-ending games defined by the following property: if a player is
currently unable to move in a position, then they are never subsequently able to move
in that position, even after play by the opponent. For example, John Conway’s game
Hackenbush is dead-ending, whereas Richard Guy’s Toads and Frogs is not.3 Domi-
neering is dead-ending. From [8] we know that all ends (games in which at least one
player has no move) are invertible modulo ℰ ; therefore all 1 × n domineering boards
are invertible. However, most positions are not invertible, even modulo ℰ . For exam-
ple, the game ∗, which occurs as the board in domineering, is not invertible modulo ℰ .4
It is an open question to classify all invertible dead-ending positions. We have
found the positions given in Theorem 5 to be the only modulo-ℰ invertible domineer-
3 The position T F F has no available move for Left (Toad), but if Right (Frog) jumps, then
Left will have a move.
4 The game ∗ is not invertible in any universe containing the game “1” ( in domineering) [1].
ing boards with game trees of depth 2 (i. e., games of rank 2). This was determined
computationally using a recursive test from [6] to check G + (−G) for equivalence to
zero modulo ℰ , but we prove the invertibility here directly, with the definition of equiv-
alence.
Proof. We will show that each of these positions satisfies G + (−G) ≡ℰ 0, i. e., o(G +
(−G) + X) = o(X) for all X ∈ ℰ . For all other rank-2 domineering boards G, the position
G + (−G) can be distinguished from 0 with X = or X = .
(1) The position has the same game tree as + and so is actually equivalent to
zero modulo ℰ .
(2) The position has the same game tree as + , which we know to be in-
vertible as it is an end.
(3) To show − ≡ℰ 0, we will show o( − + X) = o(X) for all dead-
ending games X. Suppose Left wins X playing first (playing second follows analo-
gously). Left should follow the same strategy on − + X; if Right plays in
the − component, then Left can reply with the inverse, as all options of
are invertible, bringing that component to zero. Left then resumes winning
on X. If Right does not play in − , then when Left runs out of moves in X,
say at a left end X ′ , she should play − +X ′ to +X ′ , leaving a position
with no Left moves and at least one Right move. By the definition of dead-ending
games Left has no further moves and so wins.
(4) Because all options of are invertible, the proof for − ≡ℰ 0 is almost
identical to the proof for (3). The only additional consideration is when Left runs
out of moves in X, say at X ′ . At that point, Left should play in the , bringing
the position to + + X ′ . From here Right has at least two moves, and Left has
only one, so Left will win.
(5) The proof for − ≡ℰ 0 is similar, except that Right could play in the
− +X to − +X. Left cannot just play to − +X, because −
is not equivalent to zero. Instead, Left should take the and leave − + X. If
Right plays in − , then Left can bring that position to zero and resume winning
on X; otherwise, Left runs out of moves in X and plays − to , leaving a left
end with at least one move for Right.
(6) The proof for is nearly identical to that for .
3.3 Comparisons
Inequality modulo 𝒰 is defined by
G ⩾𝒰 H if o(G + X) ⩾ o(H + X) for all X ∈ 𝒰 , where the outcome classes are partially
ordered with ℒ greatest, ℛ least, and 𝒩 , 𝒫 incomparable.
Comparability is much less common and much harder to prove in misère play than in
normal play. Even among just domineering positions, we cannot say that Left would
always prefer the zero game to the position ; there are situations in which Left would
rather have an extra move than not, including when playing first on .
In Section 2, we claimed that + ⩾ℰ . We give the justification here. It
requires a series of inequalities that build upon each other and the hand-tying princi-
ple. The hand-tying principle says that if G and H have identical Right options, and
the Left options of H are a nonempty5 subset of the Left options of G, then G ⩾ H.
This is true because if Left has a good move in H, then that same move is available
in G. Similarly, if G and H have identical Left options and the Right options of G are a
nonempty subset of the Right options of H, then G ⩾ H, because Right prefers H.
The inequalities in Proposition 1 follow from Theorem 6, a weaker version of the
3-part comparison test for ℰ proven in [6]. Let GL (GR ) denote a single Left (Right) op-
tion of G.
Theorem 6. If G, H ∈ ℰ and
(1) for every GR , there is an H R such that H R ⩽ℰ GR , and
(2) for every H L , there is a GL such that GL ⩾ℰ H L ,
then G ⩾ℰ H.
Proof. Let X ∈ ℰ and assume that Right wins G + X. We must show that Right wins
H + X. Right should follow his strategy for G + X. If at some follower H + X ′ the good
Right move in G + X ′ would be GR + X ′ , then there is an H R ⩽ℰ GR , so Right will do just
as well playing to H R + X ′ . Otherwise, if Right does not move in the H component first,
then at some point, Left moves to H L + X ′ . However, for this H L , there is a GL ⩾ℰ H L ,
so H L + X ′ is better for Right than GL + X ′ , and Right would have a winning reply to
any such GL + X ′ . So Right wins from H L + X ′ .
The following inequalities can now be verified using the hand-tying principle,
Theorem 6, and earlier inequalities.
Proposition 1. The following comparisons are true modulo ℰ , the set of dead-ending
games.
1. + ⩾ℰ (i. e., 0 ⩾ℰ )
5 This is required in misère play; in normal play, inequality holds even if this set is empty.
2. + ⩾ℰ
3. + ⩾ℰ
4. + ⩾ℰ
Note that “splitting” a board does not always produce a better game for Left. For
example, Left does not prefer + over ; playing each in a sum with , Left likes
better.
Appendix
Using our (modest) domineering program, we have the following outcomes for m × n
domineering boards under misère play:
m \ n   2   3   4   5   6   7   8   9   10  11  ⩾ 12
2       𝒫   ℒ   𝒩   ℛ   𝒫   𝒩   𝒩   ℛ   ℛ   𝒩   ℛ
3       ℛ   𝒫   𝒫   ℒ   𝒩   ℛ   ℛ   ℛ   ℛ   ℛ   ?
4       𝒩   𝒫   𝒩   𝒫   𝒩   𝒩   𝒩   ℛ   ?   ?   ?
5       ℒ   ℛ   𝒫   𝒩   ℛ   𝒩   ?   ?   ?   ?   ?
6       𝒫   𝒩   𝒩   ℒ   𝒩   ?   ?   ?   ?   ?   ?
Bibliography
[1] M. R. Allen, Peeking at partizan misère quotients, in R. J. Nowakowski (Ed.) Games of No
Chance 4, MSRI Publ. 63 (2015), 1–12.
[2] E. R. Berlekamp, Blockbusting and domineering, J. Combin. Theory Ser. A 49 (1988), 67–116.
[3] D. M. Breuker, J. W. H. M. Uiterwijk, and H. J. van den Herik, Solving 8 × 8 domineering, Theoret.
Comput. Sci. (Math Games) 230 (2000), 195–206.
[4] N. Bullock, Domineering: Solving large combinatorial search spaces, ICGA J. 25 (2002), 67–84.
[5] M. Lachmann, C. Moore, and I. Rapaport, Who wins Domineering on rectangular boards? in R. J.
Nowakowski (Ed.) More Games of No Chance, MSRI Publ. 42 (2002), 307–315.
[6] U. Larsson, R. Milley, R. J. Nowakowski, G. Renault, and C. P. Santos, Recursive comparison
tests for dicot and dead-ending games under misère play, Integers 21B (2021), #A16.
[7] G. A. Mesdal and P. Ottaway, Simplification of partizan games in misère play, Integers 7 (2007),
#G6.
[8] R. Milley and G. Renault, Dead ends in misère play: the misère monoid of canonical numbers,
Discrete Math. 313 (2013), 2223–2231.
[9] T. E. Plambeck and A. N. Siegel, Misère quotients for impartial games, J. Combin. Theory Ser. A
115 (2008), 593–622.
[10] J. W. H. M. Uiterwijk, 11 × 11 domineering is solved: the first player wins, in A. Plaat, W. Kosters,
J. van den Herik (Eds.) Computers and Games, Springer, Leiden (2016), 129–136.
Zachary Gates and Robert Kelvey
Relator games on groups
Abstract: We define two impartial games, the Relator Achievement Game REL and the
Relator Avoidance Game RAV. Given a finite group G and generating set S, both games
begin with the empty word. Two players form a word in S by alternately appending an
element from S ∪ S−1 at each turn. The first player to form a word equivalent in G to a
previous word wins the game REL but loses the game RAV. Alternatively, we can think
of REL and RAV as make a cycle and avoid a cycle games on the Cayley graph Γ(G, S). We
determine winning strategies for several families of finite groups including dihedral,
dicyclic, and products of cyclic groups.
1 Introduction
In this paper, we define two 2-player combinatorial games, the Relator Achievement
Game REL and the Relator Avoidance Game RAV. Given a finite group G and generating
set S, two players take turns choosing s or s−1 , where s is a generator from S. The only
stipulation is that if the previous player chose s, the next player cannot choose s−1
and vice versa. The player’s choice of group elements builds a word in S. The goal
of REL is to be the first player to achieve a subword equivalent to the identity in G.
The game of RAV is the misère version of REL, meaning the first player to achieve a
subword equivalent to the identity loses the game. One can play these games on the
Cayley graph of G formed by using the generating set S. Since paths in a Cayley graph
correspond to words in S, the players’ choices of generators form a path in the Cayley
graph without backtracking. Hence, when viewed graphically, the goal of REL is to be
the first player to make a cycle, whereas for RAV, the goal is to avoid cycles.
One motivation for the development of the games REL and RAV originated from
recent results by Ernst and Sieben [8] and also by Benesh [3, 4, 5, 6] for the combi-
natorial games GEN and DNG, which were first defined by Anderson and Harary [2]. In
these games, two players alternate choosing distinct elements from a finite group G
until G is generated by the chosen elements. The first player to generate the group on
Acknowledgement: The authors thank Spencer Hamblen, Andrew Kobin, and Mark Schrecengost for
providing helpful feedback and comments on an early draft of this paper. The authors also thank the
anonymous referee for their immensely helpful suggestions. In particular, their comments greatly im-
proved the clarity of Theorems 3.2, 4.2, and 4.6, and also led to the inclusion of Theorem 6.1.
Zachary Gates, Department of Mathematics & Computer Science, Wabash College, Crawfordsville,
Indiana, USA, e-mail: gatesz@wabash.edu
Robert Kelvey, Department of Mathematical & Computational Sciences, The College of Wooster,
Wooster, Ohio, USA, e-mail: rkelvey@wooster.edu
https://doi.org/10.1515/9783110755411-011
172 | Z. Gates and R. Kelvey
their turn wins the game GEN but loses DNG. Taking inspiration from this work, our goal
was to create a pair of games that incorporates the geometry of a group G through its
Cayley graph.
We have found in the current literature several combinatorial games on graphs, in-
cluding some on Cayley graphs. However, REL and RAV are distinct from these combina-
torial games. For example, Cops and Robbers (see [13, 14]), a popular pursuit-evasion
game, has been studied specifically on Cayley graphs (see [9]), and firefighting games
have been studied on Cayley graphs as well (see [10]). More recently, the Game of Cy-
cles was introduced by Su [16] and expounded by Alvarado et al. [1]. This game involves
planar graphs and two players taking turns marking previously unmarked edges with
a chosen direction. The Game of Cycles is the closest of these combinatorial games to
REL and RAV, since the goal of the game is to create a cycle. However, the parameters
for doing so are very different from those in REL.
This paper is structured as follows. In Section 2, we give a precise definition (Def-
inition 2.1) of the games REL and RAV along with some examples and initial results
concerning complete bipartite and complete Cayley graphs (see Theorem 2.5 and The-
orem 2.6). In Section 3, we explore the family of dihedral groups Dn , n ≥ 3, with its
canonical generating sets. We show winning strategies for the game REL in Theorem 3.2
and for the game RAV in Corollary 3.10. Corollary 3.10 follows from the more general
result in Theorem 3.9, which applies to any group with a generating set including an
element of order 2. In Section 4, we explore the family of dicyclic groups with two
common generating sets. These are results Theorem 4.1, Theorem 4.2, Theorem 4.3,
and Theorem 4.4. In Section 5, we examine REL for products of cyclic groups ℤn × ℤm ,
where the results depend on n modulo m (Theorem 5.1). In Section 6, we discuss two
different n-player versions of REL and prove winning strategies for three-player REL
on the dihedral groups Dn in Theorem 6.1 and Theorem 6.3. Lastly, we conclude with
some open questions in Section 7.
Definition 2.1. On turn 1, Player 1 begins with the empty word w0 . Player 1 chooses
an element s1 ∈ S ∪ S−1 to create the word w1 = w0 s1 = s1 . The players then alternate
choosing elements of S ∪ S−1 . On turn n, with n > 1, the current player begins with a
word
wn−1 = s1 s2 . . . sn−1 .
The player chooses an element sn ∈ S ∪ S−1 , with sn not equal to the inverse of sn−1 ,
to form the word
wn = wn−1 sn .
If a player forms wn such that wn ≡G wk , that is, wn and wk represent the same ele-
ment of G, for some k, 0 ≤ k < n, then that player wins REL(G, S) and loses RAV(G, S).
If from any position there are no legal moves, then the next player loses. Otherwise,
play passes to the next player and continues as described above.
Remark 2.2. When the group and generating set are clear from context, we will use
the shorthand REL or RAV to refer to the Relator Achievement Game or the Relator
Avoidance Game, respectively, for a group G and generating set S.
We forbid the trivial relator ss−1 in our games since every group contains these
relators, and we seek nontrivial relators. We also assume in our definition that a gen-
erating set S does not contain the identity for similar reasons.
For the trivial group and the cyclic group of order 2, with their canonical gener-
ating sets, both games end due to the eventual absence of a legal move. These are, in
fact, the only groups where this occurs.
Recall that the Cayley graph Γ(G, S) for a group G and generating set S is a graph
with vertices the elements of G and a directed edge from vertex g to vertex h if h = gs
for some s ∈ S. Such an edge would be labeled by s.
If we consider a path of edges in the Cayley graph Γ(G, S), then this will correspond
to a word w = s1 s2 . . . sn−1 sn in G with letters in S ∪ S−1 . Therefore we can visually
play the games of REL and RAV on a Cayley graph: a player’s choice of element sn ∈
S ∪ S−1 will correspond to traversing an undirected edge in the Cayley graph. A player
wins REL if they are the first to form a cycle (a relator) in the Cayley graph. Likewise,
a player loses RAV if they are the first to form a cycle in the Cayley graph. The rule
stating that a player may not choose the inverse of the last generator chosen translates
to disallowing backtracking in the Cayley graph.
We mention this visual Cayley graph correspondence as a useful way to analyze
the games REL and RAV. It can be helpful to play these games on a Cayley graph to
understand the winning strategies for different groups and generating sets. Note that,
because of how players choose elements from S ∪ S−1 , whenever we discuss Cayley
graphs, we refer to the undirected Cayley graph.
Example 2.3. Consider REL(ℤn , {1}), where ℤn denotes the additive group of integers
modulo n with n > 2. The corresponding Cayley graph for (ℤn , {1}) is an n-sided poly-
gon. Hence the games REL(ℤn , {1}) and RAV(ℤn , {1}) are completely determined by the
parity of n. If n is even, then Player 2 will win REL, and Player 1 will win RAV. If n is odd,
then Player 1 will win REL, and Player 2 will win RAV.
Example 2.4. Consider the quaternion group Q8 with generating set S = {i, j}. We can
investigate the game REL(Q8 , S) by means of the Cayley graph Γ(Q8 , S). See Figure 11.1,
where the labels i and j are denoted by blue and red, respectively. Note that Γ(Q8 , S) is
a complete bipartite graph; that is, the vertices can be partitioned into two sets A and
B such that for any two vertices a ∈ A and b ∈ B, there is an edge joining a and b, and
for any two elements from the same set, there is no edge between them. In this case,
we have A = {±1, ±k} and B = {±i, ±j}. The set B is shaded in Figure 11.1.
To determine a winning strategy for REL(Q8 , S), note that Player 1 must choose
from the set B on their first turn. Player 2 cannot backtrack to 1 and so must move
to a vertex from A − {1}. Next, Player 1 moves to another vertex from B distinct from
their previous choice and thus cannot win on this turn. Finally, Player 2 wins on their
second turn by moving back to 1.
Figure 11.1: Cayley Graph for Q8 with generating set {i, j}.
Theorem 2.5. Let G be a finite group with generating set S such that Γ(G, S) is a complete
bipartite graph. Then Player 2 wins REL(G, S), and Player 1 wins RAV(G, S).
Proof. The proof that Player 2 wins REL(G, S) follows the same argument as in Exam-
ple 2.4.
For RAV(G, S), the game is one of exhaustion. If |G| = 2n, then Player 1 has n possi-
ble vertices to move to on their first turn, whereas Player 2 has n − 1 options due to the
game starting at the identity. In general, Player 1 has n−k vertex options after their kth
turn, whereas Player 2 has n − k − 1 vertex options after their kth turn. These options
always exist because Γ is complete bipartite. Hence Player 2 will exhaust their options
before Player 1, and thus Player 1 wins RAV(G, S).
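These small cases can be checked mechanically. The following sketch is plain Python written for this note (the names and encodings are ours, not the paper's): a position records the set of group elements visited so far, the current element, and the last generator used, whose inverse is barred. The search decides REL and RAV for any finite group given as a multiplication function; it is run here on ℤn (Example 2.3) and on Q8 with S = {i, j} (Example 2.4).

```python
from functools import lru_cache

def game_winners(mul, inv, identity, gens):
    """Return (p1_wins_REL, p1_wins_RAV) for the games of Definition 2.1.

    mul(g, s) is the group operation, inv(s) the inverse of a generator,
    gens the tuple S ∪ S^-1.  A move may not use the inverse of the previous
    generator (no backtracking); a player with no legal move loses.
    """
    @lru_cache(maxsize=None)
    def rel(cur, visited, last):
        # True iff the player to move wins REL from this position.
        opts = [s for s in gens if last is None or s != inv(last)]
        if not opts:
            return False
        for s in opts:
            nxt = mul(cur, s)
            if nxt in visited:                      # repeats a prior word: win
                return True
            if not rel(nxt, visited | {nxt}, s):
                return True
        return False

    @lru_cache(maxsize=None)
    def rav(cur, visited, last):
        # True iff the player to move wins RAV from this position;
        # moving to a visited element loses, so only safe moves count.
        opts = [s for s in gens if last is None or s != inv(last)]
        for s in opts:
            nxt = mul(cur, s)
            if nxt not in visited and not rav(nxt, visited | {nxt}, s):
                return True
        return False

    start = frozenset([identity])
    return (rel(identity, start, None), rav(identity, start, None))

# Z_n with S = {1} (Example 2.3).
def cyclic(n):
    return game_winners(lambda g, s: (g + s) % n, lambda s: -s, 0, (1, -1))

# Q8 with S = {i, j} (Example 2.4); elements encoded as (sign, unit).
TAB = {'ii': (-1, '1'), 'ij': (1, 'k'), 'ik': (-1, 'j'),
       'ji': (-1, 'k'), 'jj': (-1, '1'), 'jk': (1, 'i'),
       'ki': (1, 'j'), 'kj': (-1, 'i'), 'kk': (-1, '1')}

def qmul(a, b):
    (sa, ua), (sb, ub) = a, b
    if ua == '1':
        s, u = 1, ub
    elif ub == '1':
        s, u = 1, ua
    else:
        s, u = TAB[ua + ub]
    return (sa * sb * s, u)

Q8_GENS = ((1, 'i'), (-1, 'i'), (1, 'j'), (-1, 'j'))
print(cyclic(5), cyclic(6))  # odd n: Player 1 wins REL; even n: Player 1 wins RAV
print(game_winners(qmul, lambda s: (-s[0], s[1]), (1, '1'), Q8_GENS))
```

The Q8 run returns (False, True): Player 2 wins REL and Player 1 wins RAV, agreeing with Example 2.4 and Theorem 2.5.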
For any nontrivial finite group G, if we let S = G − {e}, then Γ(G, S) is a complete
graph. Such a case is also easy to analyze.
Theorem 2.6. If G is a finite group of order at least 3, and S is a generating set such that
Γ(G, S) is a complete graph, then Player 1 wins REL(G, S). Player 1 wins RAV(G, S) if |G| is
even, and Player 2 wins RAV(G, S) if |G| is odd.
Proof. If Γ(G, S) is complete and |G| ≥ 3, then Player 1 wins REL(G, S) on their sec-
ond turn by moving back to e since Player 2 may not backtrack to e on their first
turn. RAV(G, S) is a game of exhaustion as in the complete bipartite case. If |G| is even,
then Player 1 will complete a Hamiltonian path in Γ(G, S) on turn |G| − 1 and thus win
RAV(G, S) since Player 2 will have no available moves on the next turn. If |G| is odd,
then Player 2 wins by completing a Hamiltonian path for the same reason.
Although generating sets that yield complete bipartite or complete Cayley graphs
allow for quick analysis of REL and RAV, they are rarely canonical generating sets for
groups. In this sense, Q8 is an outlier with its canonical generating set yielding a com-
plete bipartite Cayley graph.
We close this section with an answer to a natural question. Suppose two groups G
and H have isomorphic undirected Cayley graphs Γ(G, S) and Γ(H, T). Are the games
REL and RAV the same for both groups? The answer is yes. If a winning strategy dictates
a player move along the edge from g to gs in Γ(G, S), then the same player has a winning
strategy on the other group by moving along the corresponding edge in Γ(H, T). We
state this explicitly as the following theorem.
Theorem 2.7. Suppose Γ(G, S) and Γ(H, T) are isomorphic as undirected Cayley graphs.
A player has a winning strategy for REL(G, S) (respectively, RAV(G, S)) if and only if that
player has a winning strategy for REL(H, T) (respectively, RAV(H, T)).
We provide an example of this result in Example 3.1 at the beginning of the next
section.
3 Dihedral groups
For the dihedral groups Dn of order 2n with n ≥ 3, there are two common generating
sets: one is the Coxeter generating set composed of two reflections; the other is com-
posed of one reflection and one rotation. First, we examine the Coxeter generating set.
Example 3.1. Suppose S = {s, t} is a Coxeter generating set for the dihedral group Dn ,
that is,

Dn = ⟨s, t | s^2 = t^2 = (st)^n = e⟩.
In this case the games REL(Dn , S) and RAV(Dn , S) have the same outcomes as
REL(ℤ2n , {1}) and RAV(ℤ2n , {1}) since the undirected Cayley graphs Γ(ℤ2n , {1}) and
Γ(Dn , {s, t}) are isomorphic (see Theorem 2.7).
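This isomorphism can be checked concretely, under the assumption (ours) that Dn is modeled by permutations of the n-gon's vertices: since both Coxeter generators are involutions, the only non-backtracking walk from e alternates s, t, s, t, . . . , so the number of steps before the walk first returns to e is the length of the cycle that Γ(Dn , {s, t}) traces out.

```python
def coxeter_cycle_length(n):
    """Length of the cycle through e in the Cayley graph of D_n with the
    Coxeter generators {s, t} (both reflections).  A non-backtracking
    walk from e must alternate s, t, s, t, ..."""
    idx = range(n)
    s = tuple((-i) % n for i in idx)        # reflection i -> -i
    r = tuple((i + 1) % n for i in idx)     # rotation i -> i+1
    mul = lambda p, q: tuple(p[q[i]] for i in idx)   # composition p∘q
    t = mul(r, s)                           # reflection i -> 1-i; st = r^{-1}
    e, g, steps = tuple(idx), tuple(idx), 0
    while True:
        g = mul(g, s if steps % 2 == 0 else t)
        steps += 1
        if g == e:
            return steps
```

For each n the walk first returns to the identity after exactly 2n steps, so Γ(Dn , {s, t}) is a single 2n-cycle, just like Γ(ℤ2n , {1}).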
176 | Z. Gates and R. Kelvey
Hence we focus our attention for the rest of this paper on the following presentation
for the dihedral groups:

Dn = ⟨r, s | r^n = s^2 = (rs)^2 = e⟩.
Theorem 3.2. If n is odd, then Player 1 has a winning strategy for REL(Dn , {r, s}). If n
is even, then Player 1 has a winning strategy if n ≡ 2 mod 6, whereas Player 2 has a
winning strategy otherwise.
Before we begin the proof, we provide some remarks and an auxiliary lemma.
Remark 3.3. The Cayley graph Γ(Dn , {r, s}) contains n “squares”, each corresponding
to the relation rsrs = e. Given the normal form r i sj , where 0 ≤ i ≤ n − 1 and 0 ≤
j ≤ 1, we number the squares in increasing order by i. Square 1 contains {e, s, r, rs},
Square n contains {r n−1 , r n−1 s, e, s}, and, in general, Square i contains {r i−1 , r i−1 s, r i , r i s}.
See Figure 11.2 and Figure 11.3.
Figure 11.2: Square i, with vertices r^{i−1}, r^i, r^{i−1}s, and r^i s.
Remark 3.4. If two edges of a square have already been traversed, then neither player
will move along a third edge of that square unless it is a winning play since traversing
a third edge sets up the opposing player to win on their next turn.
Because of the previous two remarks, once the first r or r −1 edge is chosen, the
players will move in one direction, clockwise or counterclockwise, along the Cayley
graph until a cycle is completed.
Remark 3.5. If a player chooses s, then the next two moves (if they exist) are both
determined by Remark 3.4 and therefore must either both be r or both be r −1 .
Figure 11.3: The Cayley graph Γ(D5 , {r, s}) with its five squares numbered 1 through 5.
Definition 3.6. We say that a player enters Square i at vertex g ∈ {r i−1 , r i , r i−1 s, r i s} on
turn k if their choice of sk ∈ S ∪ S−1 yields wk ≡G g, and if wj ≡G h for some j < k, then
h ∉ {r i−1 , r i , r i−1 s, r i s}.
When the context is clear, we will state that a player has entered a square without
referring to the specific turn.
The following lemma will be used in the proof of Theorem 3.2. Note that the ap-
pearance of the modulo 6 condition in Theorem 3.2 is due to this lemma.
Lemma 3.7. For REL(Dn , {r, s}), suppose all moves have occurred on squares 1 through
k − 3 for some k such that 4 ≤ k ≤ n. If 5 ≤ k ≤ n and a player enters square k − 3 at the
vertex r k−4 s, then that player can guarantee entering square k at vertex r k−1 . If 4 ≤ k ≤ n
and a player enters square k − 3 at vertex r k−4 , then that player can guarantee entering
square k at vertex r k−1 s.
Proof. We assume that all prior moves have occurred on squares 1 through k − 3 and
that 5 ≤ k ≤ n. Let A, B ∈ {1, 2} with A ≠ B. Suppose that Player A enters square k − 3 at
the vertex r k−4 s, which must be done via a choice of r −1 . We then have two cases since
Player B may either play r −1 to move to r k−3 s or s to move to r k−4 .
If Player B chooses r −1 , then Player A will follow by choosing s to move to r k−3 .
The next two moves are then forced by Remark 3.5 if both players are to avoid making
the third edge on a square. Hence Player B will move to r k−2 , and Player A will move to
r k−1 , entering square k at this vertex.
If Player B chooses s, then the next two moves are forced, so Player A moves to
r k−3 , and Player B moves to r k−2 . Player A then has the option to choose r and enter
square k at r k−1 .
Now suppose 4 ≤ k ≤ n and that Player A enters square k − 3 at vertex r k−4 . For
k = 4, we note that the player enters at r 0 = e on the 0th move of the game. For k ≥ 5,
this must be done by playing r. The rest of the proof is similar to the previous case.
Remark 3.8. Allowing for repeated use of Lemma 3.7, we can effectively expand our
options for moves to the set {r, r −1 , s, t(α), u(α)} for α ≥ 1 instead of {r, r −1 , s}, where
r i sj t(α) = r i+1+3α sj+α and r i sj u(α) = r i+3α sj+α . Note that u(α) is a move available only to
Player 2 from the case k = 4 in Lemma 3.7 by entering Square 1 at r 0 = e on their 0th
move. We further note that by the proof of Lemma 3.7 the position prior to r i+1+3α sj+α
is r i+3α sj+α .
Proof of Theorem 3.2. First, we suppose without loss of generality that r is played be-
fore r −1 . A consequence of this and Remark 3.4 is that i is nondecreasing in the normal
form r i sj , where 0 ≤ i ≤ n − 1 and 0 ≤ j ≤ 1. Suppose n is odd. Then Player 1’s strategy
is to play r from vertices r k where 0 ≤ k ≤ n − 1 and r −1 from vertices r k s (see Figure 11.4
for an example). Since elements of S ∪ S−1 change the parity of exactly one of i and j
in the normal form by exactly one, we note that Player 1 moves only to elements r a sb
where a + b is odd, whereas Player 2 moves only to elements r c sd where c + d is even.
By Player 1’s strategy the power of r is strictly increasing after every two moves. Hence
we will eventually move to r n s0 = e or r n s1 = s. If the game moves to r n s0 = e, then
the game is over, and Player 1 wins since n + 0 is odd. If the game moves to r n s1 , then
Player 1 moves next since n + 1 is even. Then Player 1 plays s to win at e.
Figure 11.4: Example of Player 1 strategy for REL(D7 ) as described in Theorem 3.2. Player 1’s moves
are colored “red”, and Player 2’s moves are colored “blue”. Player 1 only needs to choose r or r −1
generators to reach a winning position. Note that when Player 2 chooses an s, the next two moves
are forced by Remark 3.4. Regardless of Player 2’s next move, Player 1 will win the game.
Now suppose n is even. If Player 1 begins by playing r, then Player 2 wins by the same
strategy as in the case where n is odd. Hence we may assume that Player 1 begins by
playing s. In this case a player wins by moving to r n = e or r n s = s. Repeated use of
Lemma 3.7 allows us to expand our move set to {r, r −1 , s, t(α), u(α)} with t(α) and u(α) as
defined in Remark 3.8. We recall from Remark 3.8 that the position immediately prior
to r i+3α sj+α is r i+3α−1 sj+α . We now split into three cases:
1. Suppose that n = 6k + 2 for some k (see Figure 11.5). Player 1 first moves to s, and
Player 2 is forced, without loss of generality, to move to rs. Player 1 can now play
t(2k) to move from rs to r 1+1+3(2k) s1+2k = r 6k+2 s = r n s = s with the penultimate
position being r n−1 s. Thus Player 1 wins.
2. Suppose that n = 6k for some k. Player 2 effectively moves to r 0 s0 = e on the 0th
move, which is the start of Player 2 using u(2k) to move to r 3(2k) s2k = r 6k s0 = r n = e,
with the penultimate position being r n−1 . Thus Player 2 wins.
3. Suppose that n = 6k + 4 for some k. We assume that Player 1 moves to s. Then
Player 2 uses t(2k + 1) to move from s to r 1+3(2k+1) s1+(2k+1) = r 6k+4 s2k+2 = r n = e, with
the penultimate position being r n−1 . Thus Player 2 wins.
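These cases can be cross-checked by brute force for small n. The following sketch is our own code (the labels 'r', 'r-', 's' are ours); it plays REL(Dn , {r, s}) on the normal forms r^i s^j under the conventions used here: revisiting a vertex wins, and the inverse of the previous generator may not be played.

```python
def rel_dn(n):
    """Exhaustively solve REL(D_n, {r, s}); (i, j) stands for r^i s^j."""
    def step(v, g):
        i, j = v
        if g == 's':                       # r^i s^j . s
            return (i, 1 - j)
        d = 1 if g == 'r' else -1
        # s r = r^{-1} s: right-multiplying by r^d flips sign when j = 1
        return ((i + d) % n, 0) if j == 0 else ((i - d) % n, 1)
    inv = {'r': 'r-', 'r-': 'r', 's': 's'}
    def wins(path, last):
        opts = [g for g in ('r', 'r-', 's') if last is None or g != inv[last]]
        if any(step(path[-1], g) in path for g in opts):
            return True                    # revisiting a vertex wins REL
        return any(not wins(path + (step(path[-1], g),), g) for g in opts)
    return 1 if wins(((0, 0),), None) else 2
```

For n = 3, . . . , 8 the search reproduces the winners stated in Theorem 3.2: Player 1 for odd n and for n ≡ 2 mod 6, Player 2 otherwise.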
Figure 11.5: Portion of a general Cayley graph for Dn with moves as in Theorem 3.2, the even case
with n ≡ 2 mod 6. By Lemma 3.7, if Player 1 enters square n − 2 at r n−3 , then they can guarantee
reaching vertex s.
Figure 11.6: Example of Player 1 winning strategy for RAV(D10 ). Player 1 moves are colored “red”, and
Player 2 moves are colored “blue”. Player 1’s strategy is to always choose the generator s.
Theorem 3.9. Let G be a finite group with generating set S containing an element s of
order 2. Then Player 1 has a winning strategy for the game RAV(G, S).
Proof. Player 1's winning strategy is to always choose the order-two generator s.
Since Player 2 can never choose s due to backtracking, they are forced to choose an-
other element of S.
We first show that a choice of s exists on each turn for Player 1. Indeed, suppose it
is Player 1’s turn and no such choice is available. Let v denote the vertex in the Cayley
graph Γ(G, S) representing this point in the game. Because Player 1 has no choice of s
available, this means that the edge labeled s from vertex v has been traversed previ-
ously. But then the vertex v must have been visited previously, meaning that Player 2’s
last move arriving at v was in fact a losing move for Player 2. Hence, if the choice of
generator s is not available for Player 1, then Player 2 already lost the game.
We now show that Player 1’s strategy is a winning strategy. Suppose for contradic-
tion that Player 1 choosing s to move from the vertex v to the vertex w is a losing move;
that is, this forms the first cycle in the Cayley graph. This means that w has previously
been visited. In the case that Player 2 reached w the previous time, Player 1’s strategy
implies that they would move to v via choosing s. Hence Player 2 actually formed a
cycle by moving to v for the second time, a contradiction.
In the case that Player 1 reached w the previous time, it was from the vertex v, so
Player 2 again must have formed a cycle by moving to v for the second time.
Corollary 3.10. Player 1 has a winning strategy for RAV(Dn , {r, s}) for any n ≥ 3.
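The corollary can also be confirmed exhaustively for small n; in RAV the backtracking prohibition is subsumed by the rule that a move to a visited vertex loses, so the search below (our own sketch, not the authors' code) simply extends self-avoiding paths until the player to move is stuck.

```python
def rav_dn(n):
    """Exhaustively solve RAV(D_n, {r, s}); (i, j) stands for r^i s^j."""
    def neighbors(v):
        i, j = v
        sign = 1 if j == 0 else -1         # s r = r^{-1} s
        return [((i + sign) % n, j), ((i - sign) % n, j), (i, 1 - j)]
    def wins(path):
        fresh = [v for v in neighbors(path[-1]) if v not in path]
        # a player who can only close a cycle (or cannot move) has lost
        return any(not wins(path + (v,)) for v in fresh)
    return 1 if wins(((0, 0),)) else 2
```

As Theorem 3.9 predicts, Player 1 wins for every small n tested.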
Example 3.11. Let H be a finite group with generating set T, and let {e, s} = ⟨s⟩ ≅ ℤ2
be a cyclic group of order two with generator s. Suppose G = H ⋊ ℤ2 with canonical
generating set

S = {(t, e) : t ∈ T} ∪ {(eH , s)}.
Then Theorem 3.9 implies that Player 1 has a winning strategy for RAV(G, S) by always
choosing the generator (eH , s). This applies in particular to the family of generalized
dihedral groups, which are defined as the groups G = H ⋊ ℤ2 where H is an Abelian
group and the action of ℤ2 on H is that of inversion.
Remark 3.12. Suppose that G is a group of even order. Then G must contain an element
of order two. It follows from Theorem 3.9 that there exists a generating set S for which
Player 1 has a winning strategy for the game of RAV(G, S).
4 Dicyclic groups

For n ≥ 2, the dicyclic group Dicn of order 4n is given by the presentation

Dicn = ⟨a, x | a^{2n} = e, x^2 = a^n, x^{−1}ax = a^{−1}⟩.

From the defining relations we can show that any g ∈ Dicn can be written in a normal
form a^i x^j with 0 ≤ i < 2n and j ∈ {0, 1}, and with the following relations:

a^k a^ℓ = a^{k+ℓ},
a^k a^ℓ x = a^{k+ℓ} x,
a^k x a^ℓ = a^{k−ℓ} x,
a^k x a^ℓ x = a^{k−ℓ+n}.
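These rules can be implemented directly on the normal form; the following sketch is our own code, representing a^i x^j as the pair (i mod 2n, j).

```python
def dic_mul(n):
    """Multiplication in Dic_n on normal forms a^i x^j ~ (i mod 2n, j)."""
    m = 2 * n
    def mul(u, v):
        (i, j), (k, l) = u, v
        if j == 0:                          # a^i . a^k x^l = a^{i+k} x^l
            return ((i + k) % m, l)
        if l == 0:                          # a^i x . a^k = a^{i-k} x
            return ((i - k) % m, 1)
        return ((i - k + n) % m, 0)         # a^i x . a^k x = a^{i-k+n}
    return mul

mul = dic_mul(3)
a, x = (1, 0), (0, 1)
# x^2 = a^n, so although x has order four, only x^0 and x^1
# appear in the normal form.
```

With n = 3, for instance, one can confirm x^2 = a^n, x^4 = e, and x a x^{−1} = a^{−1} directly on the normal forms.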
The winning strategy for the game RAV(Dicn , {a, x}) is similar to that found in The-
orem 3.9. We note here that the generator x is not of order two; however, it plays a
similar role to that of the order-two generator from Theorem 3.9; that is, in the normal
form for elements of Dicn , the possible powers of x are either 0 or 1. Hence, although
x has order four, it acts like an element of order two in the normal form.
Theorem 4.1. Player 1 has a winning strategy for RAV(Dicn , {a, x}).
Proof. Note that for n = 2, we have Dic2 = Q8 . Hence this case is covered by Exam-
ple 2.4. For the remainder of the proof, suppose n ≥ 3. See Figure 11.7 for a Cayley
graph of Dic4 .
Figure 11.7: Cayley graph for Dic4 with generators a and x. The “blue” edges correspond to the gen-
erator a and the “red” edges to the generator x. On the inner octagon, if the vertices are labeled x,
ax, a2 x, . . . , a7 x in a clockwise order, then a choice of the generator a will move a player counter-
clockwise.
Using the normal form described above, Player 1 has a winning strategy by moving
on each of their turns from ak x to ak by choosing x −1 or from ak to ak x by choosing x,
where k is an integer such that 0 ≤ k < 2n. Such a move is always available to Player 1,
which can be shown via an inductive argument. We will show explicitly the base case
and Player 1’s second turn.
For the base case, Player 1 starts with a choice of x, that is, they move from e = a0
to a0 x = x.
For Player 1's second turn, based on Player 2's choice of generator, we have three
possible game words:

xa ≡Dicn a^{2n−1}x,  xa^{−1} ≡Dicn ax,  xx = x^2 ≡Dicn a^n.

In each case, Player 1 can continue their strategy: they choose x^{−1} to move from
a^{2n−1}x to a^{2n−1}, choose x^{−1} to move from ax to a, or choose x to move from a^n
to a^n x.
Theorem 4.2. Player 1 has a winning strategy for REL(Dicn , {a, x}) for odd n.
Figure 11.8: A simplified way to visualize the Cayley graph of Dicn , which is composed of two 2n-
gons, represented here by concentric circles. The generator a moves one clockwise on the outer
circle but counterclockwise on the inner circle. The generator x moves one from the inner to outer
circle or vice versa.
We now show that Player 2 must eventually choose again from {x, x −1 } to arrive at either
ak0 or an+k0 for some even integer k0 such that ℓ0 + 1 ≤ k0 ≤ n − 1. This is broken into
two cases depending on Player 2’s move from aℓ0 .
1. Suppose Player 2 chooses x to move from aℓ0 to aℓ0 x. Then Player 1 will choose a−1
until Player 2 chooses anything other than a−1 .
Should Player 2 only choose a−1 , then Player 2 arrives first at an−2 x since ℓ0 is odd.
Then Player 1 moves to an−1 x. By Table 11.2 we see that Player 2 is forced to choose
x−1 to arrive at an−1 . Then Player 1 can move first from an−1 to an , and by Table 11.2
again, Player 2 must move to an+1 to prevent losing. Player 1 will continue to choose
a until Player 2 chooses anything other than a. If play continues in this way, then
Player 2 will be the first to reach the vertex an+ℓ0 , since ℓ0 is odd. Then Player 1
would win by playing x−1 to move to aℓ0 x. Hence, for some even m with 2 ≤ m ≤
ℓ0 − 1, Player 2 must choose xϵ with ϵ ∈ {±1} to move from an+m to an+m x ϵ . From
here Player 1 chooses xϵ to arrive at am in either case. Because 2 ≤ m ≤ ℓ0 − 1,
Player 1 will win.
Table 11.1: The sequence of moves if Player 2 moves to an−3 . Then Player 1 moves to an−2 via a. Re-
gardless of how Player 2 proceeds from an−2 , Player 1 ends up back at e or an−2 to win.
a^{n−2} →(x) a^{n−2}x →(a^{−1}) a^{n−1}x →(a^{−1}) a^{n}x →(x) a^{n}x^{2} = a^{2n} = e
  from a^{n−1}x: →(x^{−1}) a^{n−1} →(a^{−1}) a^{n−2}
  from a^{n−1}x: →(x) a^{2n−1} →(a) a^{2n−1}a = a^{2n} = e

a^{n−2} →(x^{−1}) a^{2n−2}x →(a^{−1}) a^{2n−1}x →(a^{−1}) a^{2n}x = x →(x^{−1}) e
  from a^{2n−1}x: →(x) a^{n−1} →(a^{−1}) a^{n−2}
  from a^{2n−1}x: →(x^{−1}) a^{2n−1} →(a) a^{2n} = e
Table 11.2: The first row details the sequence of possible moves should Player 2 move to an−2 x. The
second row is a continuation of the third move sequence in row one. Note that Player 1 will either
win or move to an+2 via choosing a.
We have thus shown above our base case: Player 2 must move to ak0 or an+k0 for some
even k0 such that ℓ0 + 1 ≤ k0 ≤ n − 1, where 1 ≤ ℓ0 ≤ n − 4. Now let us assume that
Player 2 has moved to aki or an+ki for some even ki such that ℓi + 1 ≤ ki ≤ n − 1, where
ℓi is an odd integer satisfying 1 ≤ ℓi ≤ n − 4.
– Suppose Player 2 moves to aki ; then Player 1 chooses a to move to aki +1 . Note that if
ki = n − 3, then Player 1 has a winning sequence of moves by Table 11.1. If ki = n − 1,
then by Table 11.2 Player 2 is forced to move to an+1 , and Player 1 will win by the
same argument in (1). Otherwise, we obtain an odd ℓi+1 such that ki +1 ≤ ℓi+1 ≤ n−4.
By the same reasoning as above, we can obtain an even ki+1 such that ℓi+1 + 1 ≤
ki+1 ≤ n − 1.
– If Player 2 moves instead to an+ki , then Player 1 will continue to play a until Player 2
chooses from {x, x−1 }. Because ki is even and n is odd, Player 1 will be the first to
move to an+n = e if ki = n − 1 or if Player 2 only chooses a. Otherwise, Player 2
chooses from {x, x−1 } to move from an+ℓi+1 to an+ℓi+1 x or an+ℓi+1 x −1 = aℓi+1 x for some
odd ℓi+1 such that ki + 1 ≤ ℓi+1 ≤ n − 4 ≤ n − 2. Similarly to the previous cases (1)
and (2), we will show that Player 2 must move to aki+1 or an+ki+1 for some even ki+1
such that ℓi+1 + 1 ≤ ki+1 ≤ n − 1.
– Suppose Player 2 chooses x to move to an+ℓi+1 x for some odd ℓi+1 such that
ki + 1 ≤ ℓi+1 ≤ n − 2. Then Player 1 will continue to play a−1 until Player 2
chooses from {x, x−1 } or Player 2 reaches an+n x = x, in which case Player 1
wins at e by playing x−1 . Hence Player 2 moves to either (an+ki+1 x)x = aki+1 or
(an+ki+1 x)x−1 = an+ki+1 for some even ki+1 such that ℓi+1 + 1 ≤ ki+1 ≤ n − 1.
– Suppose Player 2 chooses x−1 to move to aℓi+1 x for some odd ℓi+1 such that
ki + 1 ≤ ℓi+1 ≤ n − 2. Then Player 1 continues to play a−1 until Player 2 chooses
from {x, x−1 } or until Player 1 reaches an−1 x. Player 1 reaches an−1 x first because
both ℓi+1 + 1 and n − 1 are even. Note that by Table 11.2, if Player 1 moves to
an−1 x, then either Player 1 wins, or Player 2 moves to an+1 . Then Player 1 can
eventually move to either an+ℓ0 x or an+ℓ0 x −1 = aℓ0 , or Player 1 can move to
an+m xϵ xϵ = am after Player 2 moves to an+m x ϵ for some ϵ ∈ {±1} and 2 ≤ m ≤
ℓ0 − 1. In either case, one of an+ℓ0 x, aℓ0 , or am with 2 ≤ m ≤ ℓ0 − 1 has been
visited previously, and hence Player 1 wins. Hence, as in the previous case,
we can assume that Player 2 moves from aki+1 x to either (aki+1 x)x = an+ki+1 or
(aki+1 x)x−1 = aki+1 for some even ki+1 such that ℓi+1 + 1 ≤ ki+1 ≤ n − 1.
We have shown that either Player 1 wins or we can generate a strictly increasing se-
quence of positive integers (ki ) satisfying ki ≤ n − 1 for all i, since ki < ℓi+1 < ki+1 . As
the set of positive even integers less than or equal to n − 1 is finite, there must exist an
integer j such that kj = n − 1. Thus Player 2 eventually arrives at either a2n−1 or an−1 .
The choice of a to move from a2n−1 to a2n = e is a win for Player 1. By Table 11.2 Player 1
can choose a to move from an−1 to an , which, as argued previously, leads to a Player 1
win. Therefore we conclude that Player 1 wins REL(Dicn , {a, x}) for odd n ≥ 3.
Theorem 4.3. Player 2 has a winning strategy for REL(Dicn , {a, x}) when n is even.
Proof. Player 2 has a winning strategy via mirroring Player 1; that is, if Player 1 chooses
a generator s on their turn, Player 2 will follow with s on their turn. Since x 2 = (x −1 )2 =
an and n is even, we note that this strategy implies that Player 1 only lands at aℓ , where
ℓ is odd, or at ak x, where k is even. Meanwhile, Player 2 will only land at ak with even k.
To show that Player 2 has a winning strategy, we assume for contradiction that
Player 1 has a winning strategy. There are two cases: Player 1 can win either at aℓ for ℓ
odd or at ak x for k even as stated above.
First, suppose that Player 1 wins at aℓ with odd ℓ. The first time that Player 1 ar-
rived at aℓ must have been from either aℓ−1 or aℓ+1 . By Player 2’s mirroring strategy,
Player 2 would have then moved to the other. Thus, upon reaching aℓ for the second
time, Player 1 must have moved from aℓ−1 or aℓ+1 , both of which would have been vis-
ited for a second time. Hence Player 2 won the previous turn, a contradiction.
Now suppose that Player 1 wins at ak x with even k. The first time that Player 1
arrived at ak x must have been from ak or an+k . Player 2 then would have moved to
the other. Thus, upon reaching ak x for the second time, Player 1 would have again
moved from ak or an+k , both of which would have been visited for a second time. Hence
Player 2 won the previous turn, a contradiction.
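Theorems 4.2 and 4.3 can be spot-checked with the same exhaustive-search approach used for the dihedral case; the code below is our own sketch on the normal forms a^i x^j (generator labels are ours) and is feasible only for very small n.

```python
def rel_dic(n):
    """Exhaustively solve REL(Dic_n, {a, x}); (i, j) stands for a^i x^j."""
    m = 2 * n
    inv = {'a': 'a-', 'a-': 'a', 'x': 'x-', 'x-': 'x'}
    def step(v, g):
        i, j = v
        if g == 'a':                        # x a = a^{-1} x
            return ((i + 1) % m, j) if j == 0 else ((i - 1) % m, j)
        if g == 'a-':
            return ((i - 1) % m, j) if j == 0 else ((i + 1) % m, j)
        if g == 'x':                        # x^2 = a^n
            return (i, 1) if j == 0 else ((i + n) % m, 0)
        return ((i + n) % m, 1) if j == 0 else (i, 0)   # x^{-1} = a^n x
    def wins(path, last):
        opts = [g for g in inv if last is None or g != inv[last]]
        if any(step(path[-1], g) in path for g in opts):
            return True                     # revisiting a vertex wins REL
        return any(not wins(path + (step(path[-1], g),), g) for g in opts)
    return 1 if wins(((0, 0),), None) else 2
```

The search agrees with the theorems on the smallest cases: Player 2 wins for n = 2 (where Dic2 = Q8) and Player 1 wins for n = 3.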
Note that the triangle presentation for Dicn is isomorphic to that given in Sec-
tion 4.1 via the mapping
a → a, x → b−1 , ax −1 → c.
Hence we can describe group elements via the normal form ai bj with 0 ≤ i < 2n and j ∈
{0, 1}. For the proofs that follow, we will make use of this normal form. See Figure 11.9
for an example of the Cayley graph of Dic4 with generating set {a, b, c}.
For the game of RAV(Dicn , {a, b, c}), we have the same result as Theorem 4.1.
Theorem 4.4. Player 1 has a winning strategy for RAV(Dicn , {a, b, c}).
Proof. Player 1 has a winning strategy by choosing b on their first turn and then mov-
ing from ak b to ak by choosing b−1 or moving from ak to ak b by choosing b on their
subsequent turns. The only addition to the previous argument of Theorem 4.1 is ac-
counting for the generator c. Because c = ab, we have bc = bab ≡Dicn an−1 and
Figure 11.9: Cayley graph for Dic4 with generators a, b, and c. The “blue” edges correspond to the
generator a, the “red” to the generator b, and the “green” edges to c. We have a normal form ai bj
with 0 ≤ i < 2n and j ∈ {0, 1}, and the inner vertices are labeled b, ab, ab2 , . . . , ab7 in a clockwise
manner.
bc−1 = b(b−1 a−1 ) ≡Dicn a2n−1 . Note that in both cases, Player 1 can choose b on their
next turn to move to an−1 b or a2n−1 b, respectively, thus extending the argument given
in Theorem 4.1 for Player 1’s second turn. The rest of the proof follows exactly as in
Theorem 4.1.
Despite the addition of the third generator, we can see that Player 2 has the same
winning strategy for REL(Dicn , {a, b, c}) as they did for REL(Dicn , {a, x}) when n is even.
Theorem 4.5. Player 2 has a winning strategy for REL(Dicn , {a, b, c}) with even n.
Proof. The same argument as described in Theorem 4.3 applies here. Recall that we can
describe every element of Dicn via the normal form ai bj where 0 ≤ i < 2n and j ∈ {0, 1}.
From Theorem 4.3 we know that Player 1 can arrive at words equivalent to ak b for k
even and aℓ for ℓ odd. However, with the addition of the generator c, Player 1 can
also arrive at words equivalent to aℓ b for ℓ odd. Player 2 still can only arrive at words
equivalent to ak for k even. We leave the remaining details to the reader.
As opposed to the game REL(Dicn , {a, x}), Player 2 has a winning strategy for all
n ≥ 2 by Theorem 4.6. Note that we can still use Figure 11.8 as a visual aid for the proof
of Theorem 4.6 by replacing x with b and using that c = ab.
There are several relators of length three in Dicn , which will be used throughout
the proof of Theorem 4.6. These have been collected into Table 11.3 for reference.
Theorem 4.6. Player 2 has a winning strategy for REL(Dicn , {a, b, c}) with odd n.
Table 11.3: Twelve relators of length three in Dicn with generating set {a, b, c}. We refer to these rela-
tors as triangle relators because in the Cayley graph of Dicn , they form triangles.
Triangle Relators
Proof. We will first show that Player 1 cannot win if Player 1 begins the game with b±1
or c±1 .
1. Without loss of generality,1 suppose Player 1 chooses b on their first turn. Then
Player 2 will also play b to move to b2 = an . Then by the move sequences shown
in Table 11.4 we see that Player 2 wins unless Player 1 chooses a.
Player 2’s strategy now is to continue playing a until Player 1 plays anything other
than a or moves to a2n−2 , which must happen because n is odd.
Table 11.4: The sequence of moves if Player 1 first plays b. Player 2 will mirror them and move to
b2 = an . Observe that Player 2 wins unless Player 1 chooses a.
– If Player 1 moves to a2n−2 , then Player 2 will choose c−1 to move to a2n−2 c−1 =
an−1 b. By the calculations in Table 11.5 we may assume that Player 1 moves to
an−2 by choosing c−1 .
Since n is odd, Player 2 will continue to choose a−1 and be able to move to
a0 = e and win unless Player 1 chooses from {b±1 , c±1 }. Thus let us assume
that Player 1 chooses to play z ∈ {b±1 , c±1 } to move from aℓ to aℓ z for some
even ℓ such that 2 ≤ ℓ ≤ n − 3. Then Player 2 will play z to move from aℓ z to
aℓ z 2 = an+ℓ . Since every vertex am , n ≤ m ≤ 2n − 2, has been visited before,
Player 2 wins.
1 For the case where Player 1 chooses c, simply make the following changes: b changes to c, c to b,
and a±1 to a∓1 .
Table 11.5: The sequence of moves if Player 1 moves to a2n−2 . Note that all move sequences yield
Player 2 winning except the last row.
Table 11.6: The top part shows the sequence of moves after the game proceeds through am for
n ≤ m ≤ k and Player 1 moves from an+k to an+k z for z ∈ {b±1 , c ±1 }. Player 2 wins unless Player 1
chooses c or c−1 , both of which move the game to ak . The bottom part shows the sequence of moves
following Player 1’s choice of cϵ for some ϵ ∈ {±1}. Note that the only option that does not immedi-
ately yield a Player 2 win is for Player 1 to choose a−1 .
– Suppose that Player 1 plays z ∈ {b±1 , c±1 } to move from an+k to an+k z for some
even k such that 2 ≤ k ≤ n − 3. Because of the triangle relators (Table 11.3),
we can see from the top part of Table 11.6 that Player 2 will win unless z is c
or c−1 . Regardless, Player 2 will move the game to ak . From the bottom part of
Table 11.6 we see that the only possible choice for Player 1 is to choose a−1 to
move to ak−1 , with Player 2 also choosing a−1 on the next turn to move to ak−2 .
Player 2 will continue to choose a−1 until Player 1 chooses something other
than a−1 . Should Player 1 choose only a−1 , Player 2 will first reach ak−k = a0 =
e, since k is even. Hence, for some even ℓ, 2 ≤ ℓ ≤ k −2 < n−3, Player 1 chooses
y ∈ {b±1 , c±1 } to move from aℓ to aℓ y, but since y2 = an , Player 2 will also choose
y to move from aℓ y to aℓ y2 = an+ℓ , which has been previously visited.
2. Having shown that Player 1 loses if they choose b or c to start, Player 1 will choose
from {a, a−1 } on their first turn. Without loss of generality, suppose Player 1
chooses a. Then Player 2 will choose a until Player 1 chooses something other
than a. Should Player 1 only choose a, Player 2 will first reach a2n = e because 2n
is even. Hence we assume that Player 1 chooses z in {b±1 , c±1 } to move from aℓ to
aℓ z for some even ℓ such that 2 ≤ ℓ ≤ 2n − 2. We have two cases depending on
whether ℓ > n or ℓ < n.
– Suppose n < ℓ ≤ 2n − 2. Let ℓ = n + k where 1 ≤ k ≤ n − 2. After Player 1 chooses
z ∈ {b±1 , c±1 } to move from an+k to an+k z, Player 2 will choose z to move from
an+k z to an+k z 2 = a2n+k = ak , since z 2 = an for all z ∈ {b±1 , c±1 }. The vertex ak
has previously been visited, and hence Player 2 wins.
– Suppose 2 ≤ ℓ < n. By Table 11.7 we see that only a choice of z = cϵ , ϵ ∈ {±1} is
possible for Player 1, hence moving from aℓ to aℓ cϵ .
If ℓ = n − 1, then Player 2 wins by choosing bϵ to move to an−1 (cϵ bϵ ) =
an−1 an+1 = e. Hence we assume that 2 ≤ ℓ ≤ n − 3. In this case, Player 2 will
subsequently move to an+ℓ . Continuing from the second half of Table 11.7,
we see that Player 1 must choose a−1 to move to an+ℓ−1 . Then Player 2 will
continue to choose a−1 until Player 1 moves to aℓ+2 , which happens because
ℓ is even and n is odd; or until Player 1 plays something other than a−1 . We
examine each case further in items (a) and (b), respectively.
Table 11.7: The lines of play if Player 1 chooses a on their first turn and Player 2 mirrors them until
Player 1 moves from aℓ to aℓ z for some even ℓ, 2 ≤ ℓ ≤ n − 3, and z ∈ {b±1 , c ±1 }. Note that Player 2
wins unless Player 1 chooses either c or c−1 . Following this further, the bottom part shows Player 1’s
options after Player 2 moves to an+ℓ above. The only option that does not result in a Player 1 loss
is a−1 .
z = b:      a^ℓ b = a^{ℓ−1}(ab) →(c^{−1}) a^{ℓ−1}(abc^{−1}) = a^{ℓ−1}
z = b^{−1}: a^ℓ b^{−1} = a^{ℓ−1}(ab^{−1}) →(c) a^{ℓ−1}(ab^{−1}c) = a^{ℓ−1}
z = c^ϵ:    a^ℓ c^ϵ →(c^ϵ) a^ℓ c^{2ϵ} = a^{n+ℓ}

a^ℓ c^ϵ c^ϵ = a^{n+ℓ} →(z) a^{n+ℓ}z
z = c^ϵ:    a^{n+ℓ}c^ϵ →(c^ϵ) a^{n+ℓ}c^{2ϵ} = a^{2n+ℓ} = a^ℓ
z = b^ϵ:    a^{n+ℓ}b^ϵ →(b^ϵ) a^{n+ℓ}b^{2ϵ} = a^{2n+ℓ} = a^ℓ
z = a:      a^{n+ℓ}a →(b^{−ϵ}) a^{n+ℓ}(ab^{−ϵ}) = a^ℓ c^ϵ (c^ϵ ab^{−ϵ}) = a^ℓ c^ϵ
z = a^{−1}: a^{n+ℓ}a^{−1} →(a^{−1}) a^{n+ℓ}a^{−2} = a^{n+ℓ−2}
Table 11.8: If play proceeds from an+ℓ−2 as in Table 11.7 with only a−1 chosen by both players, then
Player 2 will choose b−1 from aℓ+2 . Note that the next move for Player 1 must be b−1 to move to an+ℓ+2
as all others will be a loss.
(a) Suppose play has continued from an+ℓ−1 with both players choosing only
a−1 until Player 1 moves from aℓ+3 to aℓ+3 a−1 = aℓ+2 . Then Player 2 will
play b−1 to move to aℓ+2 b−1 = an+ℓ+2 b. From Table 11.8 we see that any
choice other than b−1 for Player 1 leads to a loss; hence Player 1 moves
from an+ℓ+2 b to an+ℓ+2 .
In a manner similar to the earlier case (first subcase of (1) above), Player 2
will continue to choose a until Player 1 chooses something other than a.
If ℓ = n − 3, then Player 2 immediately wins by moving from an+ℓ+2 = a2n−1
to a2n = e. Thus we may assume that 2 ≤ ℓ ≤ n − 5. If Player 1 only chooses
a, then Player 2 will win at a2n = e since n + ℓ + 3 is even. Hence Player 1
must move from an+m to an+m y for some odd m, ℓ + 3 ≤ m ≤ n − 2, and
y ∈ {b±1 , c±1 }. Then Player 2 will play y to move to an+m y2 = a2n+m = am
since y2 = an for all y ∈ {b±1 , c±1 }. Since every vertex am , ℓ + 3 ≤ m ≤ n − 2,
has been previously visited, Player 2 wins.
(b) Suppose Player 1 plays z ∈ {b±1 , c±1 } to move from ak to ak z for some odd
k such that ℓ + 3 ≤ k ≤ n + ℓ − 2. We see from Table 11.9 that Player 2 wins
if z ∈ {c±1 }. If k ≥ n, then Player 2 wins if z ∈ {b±1 } as well since an+k has
already been visited in this case.
Now consider the case where k ≤ n − 2 and hence where an+k has not
been visited. By the second part of Table 11.9 we see that Player 2 wins
unless Player 1 chooses a. Player 2 will now choose a until Player 1 chooses
otherwise. If k = n − 2, then Player 2 wins by choosing a to move from
a2n−1 to a2n = e. Otherwise, since n + k + 2 is even, Player 2 will win at
a2n = e unless Player 1 chooses some y ∈ {b±1 , c±1 } to move from an+j to
an+j y where k + 2 ≤ j ≤ n − 2. In this case, Player 2 will mirror to move
to an+j y2 = an+j an = aj , which has already been visited. Thus Player 2
wins.
Table 11.9: The sequence of moves if play proceeds from an+ℓ−2 as in Table 11.7 until Player 1 chooses
z ∈ {bϵ , cϵ }, ϵ ∈ {±1} to move from ak+1 a−1 = ak to ak z. Note that Player 1 must choose z = bϵ by the
relators in Table 11.3. Following this further, the bottom part shows Player 1’s options after Player 2
moves to an+k above. The only option that does not result in a Player 1 loss is a.
By Theorem 3.2 and the fact that the undirected Cayley graphs of (Dn , {r, s}) and
(ℤn ×ℤ2 , {(1, 0), (0, 1)}) are isomorphic for n ≥ 3, we know the winner of REL(ℤn ×ℤ2 ) for
n ≥ 3 by Theorem 2.7. Additionally, we know that Player 2 has a winning strategy for
REL(ℤ2 × ℤ2 ) since its undirected Cayley graph is equal to that of (ℤ4 , {1}). Additional
cases for REL(ℤn × ℤm ) are covered by the following theorem.
Theorem 5.1. Consider the game REL(ℤn × ℤm , {a, b}), where n ≥ m − 1 and n, m ≥ 3.
1. If n ≡ ±1 mod m, then Player 1 has a winning strategy.
2. If n ≡ 0 mod m, then Player 2 has a winning strategy.
3. If m = 4 and n ≡ 2 mod 4, then Player 2 has a winning strategy.
Proof. Let G ≅ ℤn × ℤm with the presentation above. We can write elements from G in
the normal form ai bj , where 0 ≤ i < n and 0 ≤ j < m. Note that the Cayley graph for
G can be visualized as an n-gon of m-gons (see Figure 11.10 for a partial Cayley graph
example).
The strategy is similar in all cases, so we will describe the strategy first in terms
of the players Winner and Loser as opposed to Player 1 and Player 2. The game begins
with Winner completing an initial word w ≡G g for some g ∈ G, where a−1 and b−1 do
not appear in w. Without loss of generality, we assume that a is played before a−1 and
b is played before b−1 . After this, Winner’s strategy is to play a if Loser plays b and to
play b if Loser plays a. Play will continue in this manner unless Loser plays a−1 or b−1 ,
which must occur from the vertex g(ab)ℓ for some ℓ. Note that by Winner’s strategy
the exponent of a is nondecreasing. We show that this strategy is a winning strategy
if Loser plays a−1 or b−1 and hence that we may assume that Loser plays only a and b.
Suppose that Loser plays a−1 . Then Winner could not have played a on the pre-
vious turn since backtracking is disallowed. Hence Winner played b on the previ-
ous turn. By Winner’s strategy this means that Loser must have played a the turn
prior. Then the three most recent vertices visited before Loser plays a−1 are, in order,
g(ab)ℓ−1 , g(aℓ bℓ−1 ), and g(ab)ℓ . Since the exponent of a is nondecreasing, Loser does
not win by playing a−1 to move to g(ab)ℓ a−1 = g(aℓ−1 bℓ ). Then Winner wins by com-
pleting a commutation relator and choosing b−1 to move to g(ab)ℓ−1 . The case where
Loser plays b−1 is similar. Thus we may assume that Loser only plays a and b, and
Winner therefore moves to g(ab)i for all i.
We now consider the following five cases:
(1) If n = km + 1 for some k, then Player 1 initially plays a so that g = a and then exe-
cutes Winner’s strategy for km turns to win at g(ab)km = a(ab)km = akm+1 bkm = e.
(2) If n = km − 1 for some k, then Player 1 initially plays b so that g = b and executes
Winner’s strategy for km − 1 turns to win at g(ab)km−1 = b(ab)km−1 = akm−1 bkm = e.
(3) If n = km for some k, then Player 2 considers g = e, which they reach on the
0th turn. Then they execute Winner’s strategy for km turns to win at g(ab)km =
e(ab)km = akm bkm = e.
(4) If m = 4 and n = 4k + 2 for some k and Player 1 initially plays a, then Player 2 also
plays a to complete g = a2 and then executes Winner’s strategy for 4k turns to win
at g(ab)4k = a2 (ab)4k = a4k+2 b4k = e.
(5) If m = 4 and n = 4k + 2 for some k and Player 1 initially plays b, then Player 2
plays b to complete g = b2 and executes Winner’s strategy for 4k + 2 turns to win
at g(ab)4k+2 = b2 (ab)4k+2 = a4k+2 b4k+4 = e.
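The exponent arithmetic behind the five cases can be checked mechanically: in ℤn × ℤm , a word ai bj represents the identity exactly when n divides i and m divides j. The following is a small computational sketch (the helper name is ours, not from the paper):

```python
def is_identity(i, j, n, m):
    """a^i b^j equals the identity in Z_n x Z_m iff n | i and m | j."""
    return i % n == 0 and j % m == 0

# Cases (1)-(3) of Theorem 5.1, for a range of k and m:
for m in range(3, 9):
    for k in range(1, 6):
        assert is_identity(k*m + 1, k*m, k*m + 1, m)  # case (1): n = km + 1
        assert is_identity(k*m - 1, k*m, k*m - 1, m)  # case (2): n = km - 1
        assert is_identity(k*m, k*m, k*m, m)          # case (3): n = km

# Cases (4)-(5): m = 4 and n = 4k + 2:
for k in range(1, 6):
    assert is_identity(4*k + 2, 4*k, 4*k + 2, 4)      # ends at a^(4k+2) b^(4k)
    assert is_identity(4*k + 2, 4*k + 4, 4*k + 2, 4)  # ends at a^(4k+2) b^(4k+4)
```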
Figure 11.10: A partial Cayley graph for ℤ4 × ℤ3 , a 4-gon of 3-gons. The colored edges give an ex-
ample of Player 1’s strategy as described in Theorem 5.1. The “red” edges denote Player 1 moves,
and the “blue” edges denote Player 2 moves. By choosing the generator opposite to Player 2’s
last move, Player 1 will always ensure victory. Note as well that, in this example, the game word is
aabbab ≡G a3 b3 = a3 , but Player 2 has not achieved a relator.
Silva [12]. The second is the podium rule utilized by Benesh and Gaetz [7] to analyze
q-player DNG, where q is prime.
First, we define the standard podium rule for REL with n players as used by Li [11]
and by Nowakowski, Santos, and Silva [12]. In this rule the final player to make a legal move, i. e.,
the player to complete a relator, is the winner. The penultimate player to move finishes
runner-up, the player before that third, and so on. If a player cannot ensure victory for
themselves, then they will assist whichever player's win yields their own highest
possible ranking.
For three players, this is a bit simpler. The player to complete a relator is the win-
ner. The previous player is second, and the next player to move is third and last. Be-
cause of this podium rule, note that if Player 1 cannot win, then Player 1 will help
Player 2 to win. If Player 2 cannot win, then Player 2 will help Player 3. Finally, if Player 3
cannot win, then Player 3 will help Player 1.
Theorem 6.1. For three-player REL(Dn , {r, s}) with the standard podium rule, Player 1 has
a winning strategy if n ≡ 0 mod 3 or n ≡ 1 mod 3, and Player 2 has a winning strategy
if n ≡ 2 mod 3.
Proof. We will first show that in most cases, a player will choose not to play s. Let
{A, B, C} = {1, 2, 3} such that Player A follows Player C, Player B follows Player A, and
Player C follows Player B. Suppose that the game begins with the word w, where w
does not end in s, and that, without loss of generality, Player C plays r to move to the
word wr. Suppose that Player A does choose s to move to the word wrs. We show that
Player A can finish neither first nor second from this point and hence would never
have chosen s from wr.
If Player A has a winning strategy from this point, then Player B would finish last.
Hence Player B will choose r to move to wrsr, from which Player C will choose s to move
to wrsrs ≡Dn w to win, thus securing a second-place finish for Player B. Thus Player A
cannot win from this situation. Suppose instead that Player A can finish second and
hence that Player B has a winning strategy from the position wrs. Clearly, it cannot
be by choosing r, since this leads to a Player C win. Thus Player B must choose r −1 to
move to wrsr −1 . If Player B has a winning strategy from this point, then Player C will
play s to move to wrsr −1 s, from which Player A will play r −1 to win at wrsr −1 sr −1 ≡Dn wr.
Thus Player B has no winning strategy.
Since Player A and Player B cannot have a winning strategy from wrs, it follows
that Player C has a winning strategy. However, this leads to a last place finish for
Player A, so Player A would never have chosen s.
Now we examine the cases where n ≡ 0, 1, 2 mod 3. First, suppose n ≡ 1 mod 3.
Player 1 has a winning strategy by always choosing r. After Player 1 chooses r on turn
1, we may consider w = e, so that Player 2 is in the situation described above. Hence
Player 2 will not choose s and must choose r. This continues until the game reaches
r n ≡Dn e. Since n ≡ 1 mod 3, Player 1 reaches r n and is thus the winner.
Now suppose n ≡ 0 mod 3. In this case, Player 1 begins by choosing s. Without
loss of generality, Player 2 chooses r to move to sr. If Player 3 chooses s, then Player 1
wins by choosing r to move to srsr ≡Dn e. If Player 3 chooses r, then for all following
turns, we are in the case where the game begins with a word w not ending in s followed
by a choice of r. Therefore all players will choose r until the game reaches sr n ≡Dn s.
Since n ≡ 0 mod 3, Player 1 is the player to reach sr n and hence the winner.
Finally, suppose n ≡ 2 mod 3. If Player 1 chooses s, then without loss of general-
ity, Player 2 will choose r to move to sr. Player 3 must choose r, and each subsequent
turn will result in a choice of r until the game reaches sr n ≡Dn s. Since n ≡ 2 mod 3
and this is a total of n + 1 moves, Player 3 reaches sr n and is the winner. Thus Player 1
finishes last if they choose s on turn 1 and will instead, without loss of generality,
choose r. Again by the argument above, we may assume that all players will play r on
subsequent turns until the game ends at r n ≡Dn e. Since n ≡ 2 mod 3, Player 2 reaches
r n and is the winner.
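The case analysis above reduces to counting moves modulo 3: numbering moves from 1, move t is made by Player ((t − 1) mod 3) + 1. A quick sketch of this bookkeeping (the helper `mover` is our name, not from the paper):

```python
def mover(t):
    """Player (1, 2, or 3) who makes move number t, with moves numbered from 1."""
    return (t - 1) % 3 + 1

for n in range(3, 100):
    if n % 3 == 1:
        assert mover(n) == 1      # r-chain of n moves ends at r^n = e: Player 1 wins
    elif n % 3 == 0:
        assert mover(n + 1) == 1  # s followed by n r-moves ends at s r^n = s: Player 1 wins
    else:  # n % 3 == 2
        assert mover(n) == 2      # r-chain of n moves: Player 2 wins
        assert mover(n + 1) == 3  # an s-opening would hand the win to Player 3
```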
We now examine another podium rule for REL with n players as used by Benesh
and Gaetz [7]. The first player to complete a relator still wins the game. The ranking
then follows in the opposite manner of the standard podium rule, that is, the following
player is runner-up, the next player is third, and so on. We refer to this as the reverse
podium rule.
Relator games on groups | 197
In the case of three players, this means that if Player 1 cannot win, then Player 1
will help Player 3 to win. If Player 2 cannot win, then Player 2 will help Player 1. Finally,
if Player 3 cannot win, then Player 3 will help Player 2.
Remark 6.2. Note that Remark 3.4 still holds for dihedral groups. A player will never
prefer to be last; if Player m completes the third edge of a square, then Player m + 1
mod 3 wins, and hence Player m finishes last according to this podium rule. Since
finishing last is never preferable, no player will complete a third edge of a square if it
can be avoided. Due to this, Remark 3.5 still holds, that is, a choice of the generator s
forces the following two moves.
Theorem 6.3. For three-player REL(Dn , {r, s}) with the reverse podium rule, Player 1 has
a winning strategy if n is odd, and Player 3 has a winning strategy if n is even.
Proof. The key ingredient to this proof is Remark 6.2. As in the proof of Lemma 3.7, we
assume, without loss of generality, that players move to words equivalent to r i sj with
0 ≤ i ≤ n − 1, 0 ≤ j ≤ 1, where i is nondecreasing. Remark 6.2 then implies that a player
may play s to guarantee that the game moves from r i sj to r i+2 sj+1 with that same player
next to move.
Now suppose n = 2k + 1 for some k. Then by playing s k consecutive times Player 1
ensures that the game moves to r 2k sk with Player 1 moving next. Player 1 then moves
to r 2k+1 sk ∈ {e, s} to win since s has been visited.
Now suppose n = 2k for some k. We note that Player 1 can help Player 3 to win by
always playing s. After k times, the game arrives at r 2k sk = sk ∈ {e, s}, where Player 3
makes the last move to win since Player 2 must have moved from r 2k−2 sk to r 2k−1 sk by
Remark 6.2. This implies that Player 2 cannot have a winning strategy since Player 1
will always prefer Player 3 to win instead of Player 2. To conclude that Player 3 must
win, we now show that Player 1 does not have a winning strategy when n is even and
will therefore help Player 3 to win.
If Player 1 always selects s, then we have shown that Player 3 wins. Thus we may
assume that Player 1 plays r ±1 at some point after playing s ℓ consecutive times for
some 0 ≤ ℓ < k; that is, without loss of generality, the game begins with r 2ℓ+1 sℓ with
Player 2 moving next.
Suppose n ≡ 2 mod 4 or ℓ > 0. We show that Player 2 has a winning strategy.
They can play s k − ℓ − 1 consecutive times to move the game to r 2k−1 sk−1 ,
where Player 2 moves next. Player 2 then will move to r 2k sk−1 = sk−1 ∈ {e, s}. If ℓ > 0,
then s has been previously visited, so Player 2 wins. If n ≡ 2 mod 4, then k ≡ 1 mod 2,
so k − 1 is even. Hence sk−1 = e, and Player 2 wins at e.
Now suppose n = 4m for some m and ℓ = 0. Then, without loss of generality,
Player 1 plays r on turn 1. We may assume that Player 2 plays s t consecutive times
198 | Z. Gates and R. Kelvey
for some 0 ≤ t < k, resulting in the game moving to r 2t+1 st with Player 2 next to
move. We look at two cases, namely that Player 2 either always plays s or eventually
plays r ±1 . If Player 2 always plays s, then t = k − 1 = 2m − 1, and the game begins with
r 2(2m−1)+1 s2m−1 = r 4m−1 s2m−1 = r n−1 s. From here Player 2 must move to s or to r n−1 . Since
ℓ = 0, s has not been visited, so then Player 3 moves to e to win in either case.
Finally, suppose that Player 2 eventually plays r ±1 , that is, t < k − 1, and the game
begins with r 2t+1 st with Player 2 then moving to r 2t+2 st . Player 3 moves next and has
a winning strategy by playing s 2m − t − 1 times to move to r (2t+2)+2(2m−t−1) st+(2m−t−1) =
r 4m s2m−1 = s, with Player 3 next to move. Since s has not been visited, Player 3 then
moves to e to win.
7 Open questions
– When first devising the games REL and RAV, we wanted to create a combinatorial
game that utilized the Cayley graph of a group. Although the Cayley graph is not
necessary in defining our relator games, we have found it useful when constructing
some of our proofs. To that end, we can study the "make a cycle" and "avoid a cycle"
games on general graphs. We expect these games to be more challenging
to study due to the absence of properties such as graph regularity and symmetry
inherent in Cayley graphs.
– A fundamental problem in combinatorial game theory for impartial games is to
compute the nim-number of a game (see [15]). These allow us to determine the
outcome of the game as well as of game sums. Although we have determined the
outcome of the games REL and RAV for several families of groups, we leave open
the problem of computing their nim-numbers.
– Another goal is to extend results on REL and RAV to n players. For REL on the di-
hedral groups, this becomes difficult after more than three players are involved,
since a player can only force moves two ahead and thus loses control over their
future moves. Note that the related game DNG for n-players was studied in [7].
– We can of course ask for the outcomes of REL and RAV on other families of finite
groups. Of specific interest are the generalized dihedral groups (see Example 3.11).
We have results for RAV for generalized dihedral groups via Theorem 3.9. For a
finite generalized dihedral group G ≅ H ⋊ ℤ2 , suppose we have a winning strategy
for REL(H, T). Can we then determine a winning strategy for REL(G, S) in a manner
similar to that of Theorem 3.9?
– In computing several game trees while working through examples, we observed
that most games of REL end after traversing at most half the vertices in the Cayley
graph. We have also observed that the game seems to be less complex when more
generators are involved. These lead to interesting questions from a computational
point of view. Can we find winning strategies using a minimal number of moves?
Bibliography
[1] R. Alvarado, M. Averett, B. Gaines, C. Jackson, M. L. Karker, M. A. Marciniak, F. Su and S.
Walker, The game of cycles, preprint.
[2] M. Anderson and F. Harary, Achievement and avoidance games for generating abelian groups,
Internat. J. Game Theory 16, No. 4 (1987), 321–325.
[3] B. J. Benesh, D. C. Ernst, and N. Sieben, Impartial avoidance and achievement games for
generating symmetric and alternating groups, Int. Electron. J. Algebra 20 (2016), 70–85.
[4] B. J. Benesh, D. C. Ernst, and N. Sieben, Impartial avoidance games for generating finite groups,
North-West. Eur. J. Math. 2 (2016), 83–103.
[5] B. J. Benesh, D. C. Ernst, and N. Sieben, Impartial achievement games for generating
generalized dihedral groups, Australas. J. Combin. 68 (2017), 371–384.
[6] B. J. Benesh, D. C. Ernst, and N. Sieben, Impartial achievement games for generating nilpotent
groups, J. Group Theory 22, No. 3 (2019), 515–527.
[7] B. J. Benesh and M. R. Gaetz, A q-player impartial avoidance game for generating finite groups,
Internat. J. Game Theory 47, No. 2 (2018), 451–461.
[8] D. C. Ernst and N. Sieben, Impartial achievement and avoidance games for generating finite
groups, Internat. J. Game Theory 47, No. 2 (2018), 509–542.
[9] P. Frankl, On a pursuit game on Cayley graphs, Combinatorica 7 (1987), 67–70.
[10] F. Lehner, Firefighting on trees and Cayley graphs, Australas. J. Combin. 75 (2019), 66–72.
[11] S.-Y. R. Li, N-person Nim and N-person Moore’s games, Internat. J. Game Theory 7 (1978),
31–36.
[12] R. Nowakowski, C. Santos, and A. Silva, Three-player nim with podium rule, Internat. J. Game
Theory (2020), 1–11.
[13] R. Nowakowski and P. Winkler, Vertex-to-vertex pursuit in a graph, Discrete Math. 43, No. 2–3
(1983), 235–239.
[14] A. Quilliot, Jeux et points fixes sur les graphes, Thèse de 3ème cycle, Université de Paris VI
(1978), 131–145.
[15] A. N. Siegel, Combinatorial Game Theory, Volume 146 of Graduate Studies in Mathematics,
American Mathematical Society, Providence, RI, 2013.
[16] F. Su, Mathematics for Human Flourishing, Yale University Press, New Haven, CT, 2020.
L. R. Haff
Playing Bynum’s game cautiously
Abstract: Several sequences of infinitesimals are introduced for the purpose of ana-
lyzing a restricted form of Bynum’s game or “Eatcake”. Two of these have terms with
uptimal values (à la Conway and Ryba in the 1970s). All others (eight) are specified by
“uptimal+ forms,” i. e., standard uptimals plus a fractional uptimal. The game itself is
played on an n × m grid of unit squares, and here we describe all followers (submatri-
ces) of the 12 × 12 grid. Positional values of larger grids become intractable. However,
an examination of n × n squares, 2 ≤ n ≤ 21, reveals that all but three of them are
equal to ∗, the exceptions being the 10 × 10, 14 × 14, and 18 × 18 cases. Nonetheless, the
exceptional cases have “star-like” characteristics: they are of the form ±(G), confused
with both zero and up, and less than double-up.
Acknowledgement: In addition to correspondence with Neil A. McKay and the many helpful sugges-
tions provided by a referee, who examined two revisions, the author is grateful to Ruihan Zhuang, a
student assistant, who did computer calculations that verified and augmented those reported earlier
by the author.
L. R. Haff, Department of Mathematics, University of California, San Diego, California, USA, e-mail:
lhaff@ucsd.edu
https://doi.org/10.1515/9783110755411-012
Figure 12.1: Top: Possible opening moves from the 6 × 8 starting position. Bottom: A possible move
by Left from the 1 × 8 position.
Eatcake is described and analyzed in On Numbers and Games [3, p. 199–202]; also, [2,
pp. 233–235]. It is an example of a dicotic game, which means that both players can
move from every nonempty subposition of the game. See [8, p. 60]. Every subposition
of a dicotic game is necessarily infinitesimal. In general, a game G is infinitesimal if
−ε < G < ε for every ε > 0.
The starting position for Bynum’s game is an n×m grid of unit squares where n ≥ 1
and m ≥ 1. Any such grid is called a “cake” (with 1 × 1 as a particular case). The rules
are now illustrated by referring to the 6 × 8 starting position shown in Figure 12.1.
If Left is First, then she is required to completely remove any column from the start-
ing position. Looking at Figure 12.1, she has removed the third column, thus splitting
it into separate cakes A and B. For the second move, Right must now choose either A
or B and remove any row. (Left always takes columns; Right always takes rows.) In this
case, Right chooses B and removes the fourth row. Now Left moves in either A, C, or
D, etc. The game ends when no cakes remain, and the winner is the player who eats
the last cake.
Any single row (or column) can appear as a position in Bynum’s game. For ex-
ample, at the bottom of Figure 12.1, we see that Left has moved in the 1 × 8 cake by
taking the fourth square. However, a move by Right (from the starting position) would
eliminate the entire row. This rather obvious position is pointed out only because in-
dividual rows or columns are eliminated as soon as they appear in our version of the
game. (They become zero positions.)
In the above, it is clear that Left had 4 distinct options to start with. She started
by eliminating the third column, but the same result would appear had she eaten the
sixth column instead. Similarly, had Right been First, his removal of the second row
(for example) would have the same meaning as the removal of the fifth row. Conse-
quently, the expression “third column” will refer to the third column from either the
left edge or the right edge. Similarly, “second row” will refer either to the second row
from the top or the second from the bottom, etc.
Our present version of Bynum’s game is played on an n × m grid with n ≥ 2 and
m ≥ 2. Moves are made exactly as in Bynum’s (original) game along with the following
modification: if a single row or column becomes isolated by a move, then it is treated as
Playing Bynum’s game cautiously | 203
a zero position (such leftovers become tainted!). In the above example, suppose Left
had used the first move to take the second column instead of the third. In this case
the first column becomes a zero-position due to its isolation. Likewise, in Figure 12.1,
consider Left’s options for the third move. If she chooses A, then both columns are
effectively removed from play. Finally, consider the n × 3 (or 3 × m) case. At some point
in the game, if Right opts to play in D, for example, then he can move in exactly two
different ways. He can eat the first row (as in Bynum’s game), or he can eat the second
row, in which case D completely disappears because the two remaining rows become
isolated.
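As we read the modified rules, Left's column moves can be enumerated directly: removing column i splits the cake into pieces of widths i − 1 and m − i, and any isolated piece of width at most 1 is discarded as a zero position. A hypothetical sketch (the function `left_options` and its tuple encoding are ours; Right's row moves are symmetric):

```python
def left_options(n, m):
    """Left's options from an n x m cake in the modified game: removing
    column i (1-indexed) leaves pieces of widths i - 1 and m - i, and any
    isolated piece of width < 2 becomes a zero position and is dropped."""
    options = []
    for i in range(1, m + 1):
        pieces = tuple((n, w) for w in (i - 1, m - i) if w >= 2)
        options.append(pieces)
    return options

opts = left_options(6, 8)
assert opts[2] == ((6, 2), (6, 5))  # removing the third column: cakes A and B
assert opts[1] == ((6, 6),)         # removing the second column isolates column 1
assert left_options(6, 3)[1] == ()  # n x 3: eating the middle column empties the cake
```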
We denote an n × m starting position by [n × m]. A follower of [n × m] is any game
position that can appear after a number of moves have been made, and such positions
consist of a number of cakes or components. Again, see Figure 12.1 for an example.
After two moves, the follower consists of components A, C, and D.
First, all values of [2 × m] for m ≥ 2 are uptimals. Values of this kind were intro-
duced by Conway and Ryba in the 1980s. This initial work was unpublished, but is
often cited in books and research articles. In this regard, see McKay [6, pp. 210–214]
and [8, p. 95]. In particular, McKay [6] extended the theory of uptimals and provided
computer code for computing their canonical forms.
In this paper, all values of [n × m] for 2 < n ≤ 12 and 2 < m ≤ 12 are expressed as
“uptimal+ forms”; that is, ordinary uptimals are increased (or decreased) by multiples
of ↑3/2 . Ten sequences of infinitesimals are defined for this purpose, and considerable
effort is devoted to establishing their properties. Notably, CGSuite output is translated
into relatively tractable terms via these sequences.
The followers of [n × n], 12 < n ≤ 21, become prohibitively complex. (The [18 ×
18] case, for example, requires 1,832 pages to print out.) Nevertheless, we are able to
describe the values of all square positions in this range. In this regard, we encounter
a peculiarity that we have been unable to explain: all [n × n] positions, 2 ≤ n ≤ 21, are
equal to ∗ except for n = 10, 14, and 18. However, these exceptional cases have “star-
like” characteristics. These are of the form ±(G), confused with both zero and up, and
less than double-up.
2 Game-theoretic values
The left and right options of [n × m], a position from Bynum’s (modified) game, are
given by
and
where a and b are the numbers of left and right options, respectively,
The game ↑n is called “up-nth,” and the negatives of these games are “down,”
The following (traditional) notation appears throughout: for any game G, we write
G∗ = G + ∗, G↑ = G + ↑, etc.
Uptimals are the first of the dicotic games. (Recall that such games have an option
for both players at every position during the game.) An example of a game that is in-
finitesimal but not dicotic is {0 | {0 | −2}}. In this case, Right has an option in every
nonzero subposition, but Left has no option in −2 = { | −1}.
A number of properties are now stated for future reference. Again, see [6] for a tidy
presentation of proofs.
Lemma 3.1. The sequences ↑[n] and ↑n are positive; also, they are increasing and de-
creasing, respectively.
Lemma 3.3. The games ↑[n] ∗ and ↑n ∗ are confused with 0 (i. e., they are fuzzy).
Lemma 3.4. The games ↑[p] and ↑[q] ∗ are confused for all nonnegative integers p and q.
Playing Bynum’s game cautiously | 205
Theorem 3.5. For each n ≥ 2, we have ↑[n] > m ⋅ ↑n for all positive integers m. (Thus we
say that “↑n is infinitesimal with respect to ↑[n] ”.)
↑[n] = ↑ + ↑2 + ⋅ ⋅ ⋅ + ↑n .
u = d0 ⋅ ∗ + d1 ⋅ ↑ + d2 ⋅ ↑2 + ⋅ ⋅ ⋅ + dn ⋅ ↑n ,
u = .d1 d2 . . . dn ∗,
G : H = {GL , G : H L | GR , G : H R }.
∗ : n = ↑[n] ∗, n = 0, 1, 2, . . . .
Correspondingly, for dyadic rational numbers x ≥ 0, the fractional uptimals are de-
fined by ↑[x] ∗ = (∗ : x) and ↑x+1 = {0 | ↓[x] ∗} along with ↓[x] = −↑[x] and ↓x+1 = −↑x+1 .
The cases ↑[1/2] = {0 | ∗, ↑} and ↑3/2 = {0 || 0, ↓∗ | 0, ∗} are of particular
importance in what follows. From Siegel [8, Exercise 4.24, p. 99] we see that
↑[1/2] = ↑ + ↓3/2 . (12.2)
Equation (12.2) motivates most of our present work. Indeed, ↑[1/2] = ↑ + ↓3/2 is the first
instance of an uptimal+ form. In general, these forms are given by
(where di are integers). All positional values in our present game are written in these
terms.
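Uptimal values in this notation add componentwise: the coefficient d0 of ∗ adds modulo 2 (since ∗ + ∗ = 0), while the coefficients of ↑i add as integers. A minimal sketch of this arithmetic, in our own (star, digits) representation:

```python
def add_uptimals(u, v):
    """Add uptimal values encoded as (star, [d1, d2, ...]), where star is the
    coefficient of * taken mod 2 and di is the integer coefficient of up-ith."""
    s1, a = u
    s2, b = v
    size = max(len(a), len(b))
    a = a + [0] * (size - len(a))
    b = b + [0] * (size - len(b))
    return ((s1 + s2) % 2, [x + y for x, y in zip(a, b)])

# Example: up[2]* + up = (* + up + up^2) + up = 2.up + up^2 + *, i.e. .21*
assert add_uptimals((1, [1, 1]), (0, [1])) == (1, [2, 1])
```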
Atomic weight theory provides the following key result: for any infinitesimal G, it
follows that
a. aw(G) ≥ 2 implies G > 0,
b. aw(G) ≤ −2 implies G < 0; otherwise,
c. if −2 < aw(G) < 2, then the unconditional winner is undetermined.
Experts have noted, however, that sharper tools than atomic weight are needed for
extending the theory of all-small games. Here is a simple example: the distinction
between ↑[p] and ↑[q] , p ≠ q, can be important. However, that distinction is lost if
we consider atomic weights only. In particular, we have aw(↑[p] ) = aw(↑[q] ) = 1 for
p, q > 0.
Proof. (a) Clearly, both ↑[1/2] ⊳ 0 and ↑[1/2] ≥ 0, and hence ↑[1/2] > 0. Likewise, it is
clear (for the same reasons) that ↑3/2 > 0. The proof of (b) is also immediate and thus
omitted.
The next two theorems are easily verified by CGSuite [7], and we omit their proofs.
Theorem 3.9. We have (a) ↑[1/2] ∗ = {0, ∗ | 0, ↑∗} and (b) ↑3/2 ∗ = {0, ∗ | ↓[1/2] }.
Theorem 3.10. The following inequalities hold: ↑ > ↑[1/2] > ↑3/2 > ↑2 > 0.
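These inequalities can also be checked directly from the definitions with a generic comparison routine based on the standard recursion: G ≥ H iff no right option of G is ≤ H and no left option of H is ≥ G. The following self-contained sketch is ours (the paper's own computations use CGSuite [7]):

```python
from functools import lru_cache

class Game:
    """A game {L | R} with frozensets of options; instances are interned,
    so structurally equal games are the same object."""
    _interned = {}

    def __new__(cls, L, R):
        key = (frozenset(L), frozenset(R))
        g = cls._interned.get(key)
        if g is None:
            g = super().__new__(cls)
            g.L, g.R = key
            cls._interned[key] = g
        return g

    def __neg__(self):
        return Game([-r for r in self.R], [-l for l in self.L])

    def __add__(self, h):
        return Game([gl + h for gl in self.L] + [self + hl for hl in h.L],
                    [gr + h for gr in self.R] + [self + hr for hr in h.R])

@lru_cache(maxsize=None)
def geq(g, h):
    """g >= h iff no g^R <= h and no h^L >= g."""
    return (not any(geq(h, gr) for gr in g.R) and
            not any(geq(hl, g) for hl in h.L))

def gt(g, h):
    return geq(g, h) and not geq(h, g)

zero = Game([], [])
star = Game([zero], [zero])                    # * = {0 | 0}
up = Game([zero], [star])                      # up = {0 | *}
downstar = -up + star                          # down* = -up + *
up2 = Game([zero], [downstar])                 # up-second = {0 | down*}
up_half = Game([zero], [star, up])             # up[1/2] = {0 | *, up}
up32 = Game([zero], [Game([zero, downstar], [zero, star])])  # up^(3/2)

# Theorem 3.10: up > up[1/2] > up^(3/2) > up-second > 0
assert gt(up, up_half) and gt(up_half, up32)
assert gt(up32, up2) and gt(up2, zero)
```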
Proof. We prove only (a), as the proof of (b) is similar. The case n = 1 was proven
above. Hence we set 𝒟n = ↑[1/2] + n ⋅ ↓3/2 . Suppose that Left is First in 𝒟n . Then she will
choose ↑[1/2] + (n − 1) ⋅ ↓3/2 + ↑[1/2] + ∗, a game of atomic weight 2, and thus 𝒟n ⊳ 0.
If Right is First in 𝒟n , then his three options are (i) ∗ + n ⋅ ↓3/2 , (ii) ↑ + n ⋅ ↓3/2 , and
(iii) ↑[1/2] + (n − 1) ⋅ ↓3/2 . It follows that (i) || (ii) and (ii) = (iii). Accordingly, we will
have (i) →ᴸ ∗ + (n − 1) ⋅ ↓3/2 + ↑[1/2] ∗ = (n − 1) ⋅ ↓3/2 + ↑[1/2] > 0 (by induction) and
(ii) →ᴸ ↑ + (n − 1) ⋅ ↓3/2 + ↑[1/2] ∗ > 0 (a game of atomic weight 2). We conclude that 𝒟n ≥ 0
and hence ↑[1/2] > n ⋅ ↑3/2 for all n > 1.
(↑ dominates ↓2 as a left option, whereas ↑[2] ∗ and ↑ are confused; see Lemma 3.4.)
The last equation follows since ↑ reverses to 0, and thus {0, ↑[2] ∗ | 0} = ↑[3] .
We can likewise verify that Tm = ↑[m−3] ∗ for 7 ≤ m ≤ 10. Although the details are
muddied a bit by a quirky term T4 = ↓2 , they pose no difficulties. For m > 10, the terms
“smooth out,” and we get the following result.
Proof. We proceed by induction: for fixed m > 10 and all n such that 10 < n < m, we
assume that Tn = ↑[n−3] ∗. The proof that follows is for the case m = 2k + 1, k ≥ 5. That
for the even case is entirely similar (hence omitted). There are k + 1 left options of Tm :
(1) Tm−1 = ↑[m−4] ∗,
(2) Tm−2 = ↑[m−5] ∗,
(3) Tm−3 + T2 = ↑[m−6] ,
(4) Tm−4 + T3 = ↑[m−7] + ↑,
(5) Tm−5 + T4 = ↑[m−8] ∗ + ↓2 ,
(6) Tm−6 + T5 = ↑[m−9] + ↑[2] ,
(7) Tm−7 + T6 = ↑[m−10] + ↑[3] ,
..
.
(k + 1) Tm−k−1 + Tk = ↑[m−k−4] + ↑[k−3] .
First, options (2), (3), (4), and (5) are dominated as left options. In particular, the
following are true.
– Option (2) is dominated by (1) (see Lemma 3.1).
– Option (3) is dominated by (4), i. e., we have ↑[m−7] + ↑ > ↑[m−6] or, equivalently,
↑1 > ↑m−6 (recall that ↑n is a decreasing sequence).
– Both (4) and (5) are dominated by (6). This follows since
Now we dispatch the remaining dominated options. The values of (6) through
(k + 1) (inclusive) are all of the form ↑[m−p] + ↑[q] , where p − q = 7 and p = 9, . . . , k + 4.
Now we claim that option (k + 1) dominates (6) through (k), inclusive. In this regard,
we rewrite option (k + 1) as
Now we claim that option (k + 1) reverses to 0. First, saying that options (1) and
(k + 1) are confused is equivalent to saying that
This follows since ↑[2k−3] + ↓[k−3] + ↓[k−3] has uptimal notation with digit −1 in positions
1, . . . , k − 3 and digit 1 in positions k − 2, . . . , 2k − 3, and a result from Siegel
[8, p. 96] confirms that the sum is confused with ∗.
Finally, we show that ↑[k−3] + ↑[k−3] reverses to 0. If Left chooses this option, then
Right can only respond with ↑[k−3] ∗ = {0, ↑[k−4] ∗ | 0}. Hence we must show that 𝒟 =
T2k+1 + ↓[k−3] ∗ ≥ 0. If Right is First in 𝒟, then he has three options:
However, Left has winning moves in each case. In (i), she plays in ↓[k−3] ∗ and
chooses 0; in (ii), she chooses ↑[k−3] + ↑[k−3] > 0; and in (iii), she plays in T2k+1 ,
chooses ↑[2k−3] ∗, and gets the overall result ↑[2k−3] ∗ + ↓[k−4] ∗ = ↑[2k−3] + ↓[k−4] > 0. This
completes the proof.
Definition 4.2. A sequence of games Gn is linear if the difference Gn+1 − Gn is constant,
that is, independent of n for n ≥ 1. Otherwise, such a sequence is nonlinear.
Let us rewrite equation (12.2) as α1 = ↑ − ↑3/2 . (As mentioned earlier, this is the first
example of an uptimal+ form.) Recall that ↑3/2 is infinitesimal with respect to α1 .
Lemma 4.7. Further comparisons follow for the alpha and beta sequences:
(a) for n ≥ 1, we have αn > βn ;
(b) α1 || ∗ and αn > ∗ for n > 1;
(c) α1 < ↑, α2 || ↑, and αn > ↑ for n ≥ 3;
(d) αn+1 || αn for n ≥ 1.
Moreover, statements (b), (c), and (d) hold for the beta sequence as well.
Proof. (a) For n = 1, the inequality follows from equation (12.2) and Lemma 4.6. Suppose
n > 1 and set 𝒟 = αn − βn = {0 | αn−1 } + {−βn−1 | 0}. If Left is First in 𝒟, then
𝒟 →ᴸ αn − βn−1 , and Right can reply with either αn−1 − βn−1 > 0 (by induction) or
αn − 0 > 0. Thus 𝒟 ⊳ 0. Otherwise, if Right starts, then either
𝒟 →ᴿ αn−1 − βn →ᴸ αn−1 − βn−1 > 0 or 𝒟 →ᴿ αn + 0 > 0,
The proofs of (b), (c), and (d) for the beta sequence are omitted.
Left will respond to the latter by playing in −αn−1 , and an easy induction will show
that the latter is positive. Consequently, 𝒟n ⊳ 0. Also, we claim that 𝒟n ≥ 0 because
either
𝒟n →ᴿ (αn ∗) − αn →ᴸ 0 or 𝒟n →ᴿ αn+1 ∗ > 0 or 𝒟n →ᴿ αn+1 − αn .
In the latter position, Left can win by playing in −αn . It is left for the reader to provide
the details. Thus we have αn+1 ∗ > αn .
Proof. For n = 1, see Theorem 3.9. From definition, αn + ∗ = {∗, αn | αn−1 ∗, αn }. Left’s
option ∗ is dominated by αn (from Lemma 4.7(b)). Also, αn reverses to 0 since
αn ∗ →ᴸ αn →ᴿ αn−1 and αn−1 < αn ∗ (from Theorem 4.8).
Theorem 4.10. The successive differences follow. We have αn −αn−1 = ↑∗ and βn −βn−1 =
↑∗, n ≥ 2.
𝒟2 = α2 − α1 + ↓∗ = {0 | α1 } + {∗, ↓ | 0} + {0 | 0, ∗}.
One readily shows that (i), (ii), and (iii) are all dominated by (iv) as a left option. Moreover,
Right's response to (iv) is immediate, that is, (iv) →ᴿ α1 − α1 + 0 = 0. We conclude
that 𝒟2 ≤ 0.
If Right is First in 𝒟2 , then his options are (i) α1 − α1 + ↓∗, (ii) α2 + 0 + ↓∗, (iii)
α2 − α1 + 0, and (iv) α2 − α1 + ∗. In this case, (ii), (iii), and (iv) are all dominated as right
options by (i), so we need only consider (i) →ᴸ 0 (where Left plays in ↓∗). Consequently,
𝒟2 ≥ 0, and we conclude that 𝒟2 = 0.
For n > 2, we set 𝒟n = αn − αn−1 + ↓∗ = {0 | αn−1 } + {−αn−2 | 0} + {0 | 0, ∗}. Here
Left’s options are
Now it happens that (iii) dominates both (i) and (ii) as left options. In the first case,
(iii) − (i) = αn + ↑∗ has two Right options, but Left wins both, that is,
αn−1 + ↑∗ →ᴸ αn−1 + 0 > 0 and αn + 0 > 0.
Additionally, (iii) is equal to (ii) by induction. Thus we need only consider (iii). Since
(iii) →ᴿ αn−1 − αn−1 = 0, it follows that 𝒟n ≤ 0.
If Right is First in 𝒟n , then his options are given by
It is left for the reader to show that option (i) dominates each of the other three as a
right option. That done, we will have (i) →ᴸ 0 and, consequently, 𝒟n ≥ 0. We have
shown that 𝒟n = 0.
and
∑_{i=2}^{n} (αi − αi−1 ) = ∑_{i=2}^{m} (αi − αi−1 ) + ∑_{i=m+1}^{n} (αi − αi−1 )
= (m − 1) ⋅ (↑∗) + (αn − αm ),
Neil A. McKay responded to an earlier version of this work by suggesting the fol-
lowing equation.
Corollary 4.12. It follows that αn = (n − 1) ⋅ ↑ + α1 if n is odd, and αn = (n − 1) ⋅ ↑ + α1 ∗
if n is even.
Playing Bynum’s game cautiously | 213
Corollary 4.13. It follows that αn = n ⋅ ↑ − ↑3/2 if n is odd, and αn = ∗ + n ⋅ ↑ − ↑3/2
if n is even.
Proof. Let n be an odd positive integer. From Corollary 4.13 we have αn = n⋅↑−↑3/2 < n⋅↑
(since ↑3/2 > 0). If n is even, then
Exactly the same inequalities (as in Corollary 4.14) hold for the β-sequence. More-
over, the same reasoning that led to Corollary 4.13 can be used to provide a similar
statement for the β-sequence (the details are omitted).
βn = (n − 1) ⋅ ↑ + ↑3/2 if n is odd, and βn = ∗ + (n − 1) ⋅ ↑ + ↑3/2 if n is even.
Proof. (a) The first inequality is obvious. For the latter, set 𝒟n = γna + ∗. For n = 1,
we have 𝒟1 →ᴸ γ1a + 0 > 0. But if Right starts, then
[𝒟1 →ᴿ α1 + ∗ →ᴸ α1 > 0] or [𝒟1 →ᴿ α2 + ∗ →ᴸ α2 > 0] or [𝒟1 →ᴿ γ1a > 0];
consequently, γ1a > ∗. For n > 1, 𝒟n →ᴿ γna > 0. Otherwise, we have
𝒟n →ᴿ γn−1a + ∗ > 0 (by induction) or 𝒟n →ᴿ γna > 0.
𝒟n →ᴿ γn−1a + ↓ > 0 (by induction) or 𝒟n →ᴿ γna + 0 > 0.
𝒟n = γna − αn + ∗ = {0 | γn−1a } + {−αn−1 | 0} + {0 | 0}.
First, we claim that 𝒟1 ⊳ 0, where 𝒟1 = {0 | α1 , α2 } + {∗, ↓ | 0} + {0 | 0}. If Left is First,
then she will choose ∗ in −α1 , and the result is γ1a > 0 (from part (a)). Now suppose Right
is First. Then his options are (i) α1 − α1 + ∗ →ᴸ 0, (ii) α2 − α1 + ∗ > 0 (from Theorem 4.8),
(iii) γ1a + 0 + ∗ > 0 (from part (a)), and (iv) γ1a − α1 + 0 →ᴸ γ1a + ∗ > 0. Hence 𝒟1 ≥ 0,
and the foregoing shows that γ1a ∗ > α1 .
If Left is First in 𝒟n (n ≥ 2), then she will choose γna − αn−1 + ∗, from which Right
has three options:
a
(i) γn−1 − αn−1 + ∗ > 0 (by induction), (ii) γna + 0 + ∗ > 0 (from part (a)), and
(iii) γna − αn .
Left will respond to (iii) with γna − αn−1 . By induction it follows that
γna − αn−1 →ᴿ γn−1a − αn−1 = (γn−1a∗) − αn−1 + ∗ →ᴸ (γn−1a∗) − αn−1 > 0, or
γna − αn−1 →ᴿ γna + 0 > 0.
𝒟n →ᴿ γn−1a − αn + ∗ →ᴸ γn−1a − αn−1 + ∗ > 0 (by induction), or
𝒟n →ᴿ γna + 0 + ∗ > 0 (by part (a)), or
𝒟n →ᴿ γna − αn + 0 →ᴸ γna − αn−1.
It was previously seen that the last position is a second player win. We now have 𝒟n ≥ 0 and thus 𝒟n > 0. We conclude that γna∗ > αn.
But (i) and (ii) are positive (Lemma 4.17(a, c)). Also, (iii) is positive since γ2a − α1 >
(α2 ∗) − α1 > 0 by Lemma 4.17(c) and the fact that αn is virtually increasing. It follows
that 𝒟2 ⊳ 0. Now suppose Right is First in 𝒟2 . Then his options are
Left clearly wins (i), and from Lemma 4.17(a) it follows that (iii) dominates (ii) as a Right
option. It is left for the reader to show that Left can win (iii). This done, we will have
𝒟2 ≥ 0 and, consequently, γ2a > γ1a ∗.
Now we claim that γna > γn−1a∗ for n ≥ 3. If Left is First in 𝒟n, then her winning move is to γna − γn−2a + ∗ since Right's options from the latter are
(i) γn−1a − γn−2a + ∗, (ii) γna + 0 + ∗, and (iii) γna − γn−2a + 0,
none of which will provide him with a winning move. Again, (iii) dominates (ii) as a
right option. Left wins (i) by induction, and following (iii), Left will respond with
γn−1a − γn−3a = (γn−1a − γn−2a∗) + (γn−2a∗ − γn−3a) > 0 (again by induction).
Thus we have 𝒟n ⊳ 0. Now let Right move first in 𝒟n . Then he has three options:
(i) 𝒟n →ᴿ γn−1a − γn−1a + ∗, (ii) 𝒟n →ᴿ γna + 0 + ∗, and (iii) 𝒟n →ᴿ γna − γn−1a + 0.
Here (ii) dominates (iii) as a right option, option (i) is fuzzy, and option (ii) is positive
by Lemma 4.17(a). Hence 𝒟n ≥ 0, and we conclude that γna > γn−1a∗ (n ≥ 3).
Proof. By definition, γ1a + ∗ = {∗, γ1a | γ1a , α1 ∗, α2 ∗}. First, γ1a is dominated by α1 ∗ as a
right option (by Lemma 4.17(c)). It is also the case that the left options ∗ and γ1a reverse
to 0 (the details are omitted).
γ1a = 2 ⋅ α1 + ∗ = 2 ⋅ ↑ + 2 ⋅ ↓3/2 + ∗.
𝒟2 = γ2a − γ1a + ↓∗ = {0 | γ1a} + {−α1, −α2 | 0} + {0 | 0, ∗}.
(i) 0 − γ1a + ↓∗, (ii) γ2a − α1 + ↓∗, (iii) γ2a − α2 + ↓∗, and (iv) γ2a − γ1a + 0.
However, option (iv) dominates the first three of these as a left option.
To begin with, (iv) − (i) = γ2a + ↑∗ ≥ 0 because
(γ2a + ↑∗) →ᴿ (γ1a + ↑∗) →ᴸ (γ1a + 0) > 0 or (γ2a + ↑∗) →ᴿ (γ2a + 0) > 0,
(iv) − (ii) = −γ1a + α1 + ↑∗ = −α1 + ↑ > 0 (by Theorem 3.10 and Lemma 4.20), or
(iv) − (iii) = −γ1a + α2 + ↑∗ = α2 − 2 ⋅ α1 + ↑ = (α2 − α1) − (α1 + ↓) = ↑∗ + β1 (from Theorem 4.10) = (β2 − β1) + β1 = β2 > 0.
Thus we need only consider (iv) γ2a − γ1a →ᴿ γ1a − γ1a = 0. This proves that 𝒟2 ≤ 0. If Right is First in 𝒟2, then he also has four options:
(i) γ1a − γ1a + ↓∗, (ii) γ2a + 0 + ↓∗, (iii) γ2a − γ1a + 0, and (iv) γ2a − γ1a + ∗.
In this case, it is left for the reader to show that option (i) is the single, dominant right option. This done, we will have (i) ↓∗ →ᴸ 0, so that 𝒟2 ≥ 0. This establishes that 𝒟2 = 0.
In general, Left’s options for n ≥ 3 are
(i) 0 − γn−1a + ↓∗, (ii) γna − γn−2a + ↓∗, and (iii) γna − γn−1a.
Here we claim that (ii) dominates both (i) and (iii) as a left option since
(ii) − (i) = (γn−1a − γn−2a∗) + γna∗ > 0 (by induction), and
(ii) − (iii) = γn−1a − γn−2a + ↓∗ = 0 (by induction).
Finally, (ii) γna − γn−2a + ↓∗ →ᴿ γn−1a − γn−2a + ↓∗ = 0, and thus 𝒟n ≤ 0. In the proof that 𝒟n ≥ 0, there are four right options among which ↓∗ is dominant (the details are omitted). Since ↓∗ →ᴸ 0, we declare that γna − γn−1a = ↑∗ for n ≥ 2.
    γna = { ∗ + (n + 1) ⋅ ↑ − 2 ⋅ ↑3/2   if n is odd,
            (n + 1) ⋅ ↑ − 2 ⋅ ↑3/2       if n is even.
Thus we see that aw(γna ) = n + 1. Now, stated without proof, we have the following:
The three sequences that we have seen thus far are related by the following corollary. (Again, the details are omitted.)
𝒟n = γnb − γna∗ = {0 | γn−1b} + {−γn−1a∗ | 0}; so, in particular, 𝒟1 = {0 | γ1a, γ2a} + {−α1∗, −α2∗ | 0}. For n = 1, Left's winning move is
𝒟1 →ᴸ {0 | γ1a, γ2a} + {0, ↓∗ | 0, ∗}. (Recall that −α1∗ = {0, ↓∗ | 0, ∗}.)
Left wins the first two of these by choosing 0 in −α1 ∗, and (iii) is positive from part (a).
Additionally, Left wins (iv) by playing in ∗. Hence 𝒟1 ⊳ 0.
If Right is First in 𝒟1 , then his options are (i) γ1a − γ1a ∗, (ii) γ2a − γ1a ∗, and (iii) γ1b + 0.
Left obviously wins (i) and (iii), and she wins (ii), since γna is virtually increasing. Thus
we have 𝒟1 ≥ 0, and hence γ1b > γ1a ∗.
If Left moves first in 𝒟n, then we will have 𝒟n →ᴸ γnb − γn−1a∗. Now Right can reply with γn−1b − γn−1a∗ > 0 (by induction) or γnb > 0. Hence 𝒟n ⊳ 0. Otherwise, if Right is First, then we can have
𝒟n →ᴿ γn−1b − γna∗ →ᴸ γn−1b − γn−1a∗ = 0 (by induction) or 𝒟n →ᴿ γnb > 0.
𝒟2 = γ2b − γ1b + ∗ = {0 | γ1b} + {−γ1a, −γ2a | 0} + ∗.
First, 𝒟2 ⊳ 0 because Left will choose γ2b − γ2a + ∗ > 0 (where the inequality follows
from Lemma 4.26(c)). Otherwise, if Right is First, then his options are
(i) γ1b − γ1b + ∗ →ᴸ 0, (ii) γ2b − 0 + ∗ > 0, and (iii) γ2b − γ1b,
where Left is seen to have a winning move in (iii) (following two more moves). Thus 𝒟2 ≥ 0 and, consequently, γ2b > γ1b∗. Suppose that n ≥ 3. In this case, Left's winning move is given by
𝒟n →ᴸ γnb − γn−2b + ∗ = (γnb − γn−1b) + (γn−1b − γn−2b∗),
𝒟 = γ1b − γ1a − α1∗ = {0 | γ1a, γ2a} + {−α1, −α2 | 0} + {0, ↓∗ | 0, ∗}
and show that 𝒟 = 0. If Left is First, then she will not play in γ1b because if she does,
then Right will choose 0 in −α1 ∗. Thus we consider Left’s other options:
(i) γ1b − α1 − α1 ∗, (ii) γ1b − α2 − α1 ∗, (iii) γ1b − γ1a + 0, and (iv) γ1b − γ1a + ↓∗.
Here (i) is confused with (ii) from Lemma 4.7(d). Also, (i) is equal to (iii) from Lemma
4.20, and (ii) is equal to (iv) due to Lemma 4.20. Thus we need only consider
(i) →ᴿ γ1a − α1 − α1∗ = 0 (by Lemma 4.20) and
(ii) →ᴿ γ2a − α2 − α1∗ = 0 (by Corollary 4.13 and Corollary 4.22).
Thus 𝒟 ≤ 0. Now suppose Right is First. Then his options in 𝒟 are as follows:
In this case, (i) clearly dominates (iii) as a right option. Additionally, (i) dominates
both (iv) and (v) as right options. In particular,
(i) →ᴸ 0 and (ii) →ᴸ γ2a − α2 − α1∗ = 0 (by Corollaries 4.12 and 4.22).
Now we have 𝒟 ≥ 0, and hence γ1b = γ1a + α1 ∗. The second equation in the lemma
follows from substitution by using previously established uptimal+ forms.
It is fairly obvious that the method we used to obtain the first differences, uptimal+
forms, and so on for all previous sequences can be used here as well. Consequently,
further properties of the γ b -sequence and of the six sequences defined further are all
stated without proof.
    γnb = { (n + 2) ⋅ ↑ − 3 ⋅ ↑3/2       if n is odd,
            ∗ + (n + 2) ⋅ ↑ − 3 ⋅ ↑3/2   if n is even.
Hence aw(γnb ) = n + 2.
γ1c = γ2a + ↓2 ∗.
In this case, aw(γnc ) = 3 for all n. Next, we have another strictly decreasing se-
quence.
δ1 = α1 + ↓2 = ↑ + ↓2 − ↑3/2 .
Hence aw(δn ) = 1, n ≥ 1.
The first term ε1a was given earlier by Theorem 3.9(b). We restate it below (in
Lemma 4.46) because it appears in the uptimal+ form.
ε1a = β1 ∗ = ↑3/2 ∗.
Lemma 4.51. Inequalities (a), (b), and (c) in Lemma 4.45 are also valid for θna with the exception that 2 ⋅ θna > ↑. In addition, θna > εma for all n ≠ m.
θ1a = ∗ + α1 − ↑3/2 .
Lemma 4.57. Inequalities (a), (b), and (c) in Lemma 4.45 are also valid for εnb. In addition, εnb < εma for all n ≠ m.
εnb = ∗ + ↓ + ↑[n+2] .
Lemma 4.62. Inequalities (a), (b), and (c) in Lemma 4.45 are also valid for θnb with the exception that 2 ⋅ θnb > ↑. In addition, θnb > θma for all n ≠ m.
θ1b = ∗ + δ1 + ε = ∗ + ↑ + ↓2 .
θnb = ∗ + 2 ⋅ ↑ + ↓[n+1] .
Position   Value                                 aw     Outcome
2 × 2      ∗                                     0      𝒩
2 × 3      ↑∗                                    1      𝒩
2 × 4      ↓2                                    0      ℛ
2 × 5      ↑[2]∗                                 1      𝒩
2 × 6      ↑[3]∗                                 1      𝒩
2 × 7      ↑[4]∗                                 1      𝒩
2 × 8      ↑[5]∗                                 1      𝒩
2 × 9      ↑[6]∗                                 1      𝒩
2 × 10     ↑[7]∗                                 1      𝒩
2 × 11     ↑[8]∗                                 1      𝒩
2 × 12     ↑[9]∗                                 1      𝒩
3 × 3      ∗                                     0      𝒩
3 × 4      ↓                                     −1     ℛ
3 × 5      −α1                                   −1     ℛ
3 × 6      −α2                                   −2     ℛ
3 × 7      ∗                                     0      𝒩
3 × 8      ↓                                     −1     ℛ
3 × 9      −α1                                   −1     ℛ
3 × 10     −α2                                   −2     ℛ
3 × 11     ∗                                     0      𝒩
3 × 12     ↓                                     −1     ℛ
4 × 4      ∗                                     0      𝒩
4 × 5      ε1a                                   0      𝒩
4 × 6      {∗, ↑[2] | −α2}                       −1/2   𝒩
4 × 7      {⇑ | ∗}                               1      ℒ
4 × 8      {0, ↑∗ | ↓}                           0      𝒩
4 × 9      {β2 | −α1}                            0      𝒩
4 × 10     {⇑ | ↑∗ || ↑2∗ ||| −α2}               −1/2   𝒩
4 × 11     {3 ⋅ ↑ | ↑∗ || ∗}                     1      ℒ
4 × 12     {0, {⇑∗ | 0} | ↓}                     0      𝒩
5 × 5      ∗                                     0      𝒩
5 × 6      ↓                                     −1     ℛ
5 × 7      {γ1b∗ | ∗}                            1      ℒ
5 × 8      {0, {γ1a∗ | 0, α1∗} | ↓}              0      ℛ
5 × 9      {{0, α1∗ | −α1} | 0}                  0      ℛ
5 × 10     {∗, {α1∗ | 0} | −α2}                  −1     ℛ
5 × 11     {γ1b | α1∗ || ∗}                      1      ℒ
5 × 12     {0, {γ1b | α1, γ1a || α1∗ | 0} | ↓}   0      𝒩
6 × 6      ∗                                     0      𝒩
6 × 7      {γ3a∗ | ↑[4]}                         5/2    ℒ
6 × 8      see below                             1      ℒ
6 × 9      {α3∗ | ε6a}                           3/2    ℒ
6 × 10     {α2∗ | −δ6}                           1/2    𝒩
6 × 11     see below                             11/4   ℒ
6 × 12     see below                             3/2    ℒ
Along with each value, the atomic weight and outcome class are stated. Recall that every combinatorial game G is found in exactly one of four outcome classes: ℒ (Left wins regardless of who moves first), ℛ (Right wins regardless of who moves first), 𝒩 (the Next player to move wins), and 𝒫 (the Previous player wins).
Example 4.65. Figure 12.2 shows the board position following 10 moves from a 13 × 16 starting position. Five components remain, and their corresponding atomic weights are given. Left moved first, so, again, it's Left's turn.
{{0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, ∗}}}}}}}.
By comparison with the previous expression, the information provided by its trans-
lation, [9 × 6]L = α1 + ↓[6] ∗, is immediately available. Indeed, such translations are
generally more tractable. For large positions, however, they are not readily obtained
by hand.
The option [9 × 6]L specifies an overall position SL given by the sum of the following:
A = ∗ + ↑[5]
B = ↓
C = ↑ + ↓[6] − ↑3/2
D = ↑ − ↑3/2
E = ↑ − ↑3/2.
Example 4.66. This example shows a portion of Table 12.2 for n = 5. Here the term φ = {α1 | −α1∗} is used to achieve further simplification. Regarding φ, we find that aw(φ) = 0, φ > 0, φ || ∗, and φ < ↑.
Position   Value                        aw    Outcome
5 × 7      {γ1b∗ | ∗}                   1     ℒ
5 × 8      {0, {γ1a∗ | 0, γ1a∗}}        0     ℛ
5 × 9      {{0, α1∗ | −α1} | 0}         0     ℛ
5 × 10     {∗, {α1∗ | 0} | −α2}         −1    ℒ
Upon translation of the entries in Table 12.3, we obtain certain “extensions of extended
uptimals,” namely, [5 × 7] = (↑ − ↑3/2 ) + φ, [5 × 8] = (∗ − ↑3/2 ) + φ, [5 × 9] = ∗ + φ, and
[5 × 10] = ↑ + φ.
[10 × 10] = ±({{0 ||| 0 || 0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗} |||| ⇑∗, {0 | {0, ∗ | 0, ↑∗},
{0 || 0, ∗ | 0, ↑∗}}} ||| {0 ||| 0 || 0, ∗ | 0, ↑∗ |||| {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗}} | 0,
It does happen that [10 × 10] has “star-like” characteristics; namely, it is confused
with 0, ∗, and ↑ and is less than 2 ⋅ ↑. We mention in passing that this position is
conspicuous in at least one other way. Namely, [10×10] has four winning moves (so two
of them are necessarily dominated options). Only the fifth column (row) happens to be
a losing move for Left (Right). The winning moves for Left are illustrated in Figure 12.4.
Figure 12.4: Left’s (4) winning moves from [10 × 10] (depicted by grey bars).
On the other hand, all other [n × n] positions for 2 ≤ n ≤ 12 admit exactly one winning
move.
For square positions [n × n], n ≥ 12, our purpose was simply to evaluate them. With the [10 × 10] case in mind, we were looking for further instances in which such positions were unequal to star. No further attempt was made to analyze
the followers of these positions. (The [18 × 18] case alone would require 1,832 pages of
MSWord to print out!)
Similarly to the [10 × 10] case, the values of [14 × 14] and [18 × 18] are also unequal
to star; moreover, they have the same star-like characteristics. Is there a pattern here?
It is yet to be determined whether or not [22 × 22], [26 × 26], . . . are also unequal to star.
Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial Game Theory, 2nd edition. CRC Press, Boca Raton, FL, 2019.
[2] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways, 2nd edition. A. K. Peters, Ltd.,
Wellesley, MA, 2001.
[3] J. H. Conway, On Numbers and Games, 2nd edition. A. K. Peters, Ltd., Natick, MA, 2001.
[4] L. R. Haff and W. J. Garner, An Introduction to Combinatorial Game Theory. Lulu Press, 2018.
[5] https://www.math.ucsd.edu/~haff/.
[6] N. A. McKay, Canonical forms of uptimals, Theoret. Comput. Sci., 412 (2011), 7122–7132.
[7] A. N. Siegel, CGSuite. Combinatorial games suite computer program, 2004.
[8] A. N. Siegel, Combinatorial Game Theory, Graduate Studies in Mathematics, Vol. 146. American
Mathematical Society, Providence, RI, 2013.
Melissa A. Huggan and Craig Tennenhouse
Genetically modified games
Abstract: Genetic programming is the practice of evolving formulas using crossover
and mutation of genes representing functional operations. Motivated by genetic evo-
lution, we introduce and solve two combinatorial games, and we demonstrate some
advantages and pitfalls of using genetic programming to investigate Grundy values.
We conclude by investigating a combinatorial game whose ruleset and starting posi-
tions are inspired by genetic structures.
1 Introduction
The fundamental unit of biological evolution is a gene, which represents a small piece
of information, and the genome is a collection of genes that encodes an organism’s
complete genetic information. Within the context of biological evolution, the genes
of the most fit organisms survive and are passed on to the next generation, with their
chromosomes modifying over time to better fit their environment through compe-
tition. This modification occurs through the processes of mutation and crossover,
wherein individual genes are altered and pairs of chromosomes trade information,
respectively, as organisms pass down their genetic information to their progeny (see
Figure 13.1).
Figure 13.1: A pair of chromosomes undergoing crossover and then mutation. In general, as exempli-
fied by a grey gene, this process is not restricted to two values.
Acknowledgement: Melissa A. Huggan was supported by the Natural Sciences and Engineering Research Council of Canada (funding reference number PDF-532564-2019).
https://doi.org/10.1515/9783110755411-013
230 | M. A. Huggan and C. Tennenhouse
This set of mechanisms in biological evolution has been coopted as a model for algorithmic development of heuristic solutions to a variety of problems, like antenna design [14], the traveling salesman problem [7], and graph coloring [10]. In these problems a chromosome encodes information about the structure and properties of working solutions. These solutions are the results of genetic algorithms. When the chromosome instead represents a function or program, the process is called genetic programming. Genetic programming is often used when a user has a collection of data points and is looking for a function to fit them. The fitness of a particular program is therefore related to the error between the data points and the program. This mechanism
is similar to that of regression in statistical methods (Figure 13.2). Given a set of data
points, both tools are used to minimize the error between these data points and the as-
sociated function values. Statistical regression requires a predefined model in which
the coefficients are optimized. In genetic programming the function itself is iteratively
adjusted, via specified operations called crossover and mutation, using a pool of el-
ementary functions and constants. No predefined model is necessary. If the resulting
function better fits the data, then the change is likely to be accepted, and the process
continues. The process stops evolving based on a predetermined number of iterations
or when a particular fitness threshold is reached.
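The loop just described can be sketched in a few lines of Python. This is an illustrative toy of our own, not the authors' setup: to stay short it evolves the two coefficients of a fixed linear model (which makes it closer to a genetic algorithm than to full genetic programming, where the function tree itself would evolve), with squared error against hypothetical data points as the fitness and elitist selection so that the best error never increases.

```python
import random

random.seed(0)
data = [(x, 3 * x + 2) for x in range(-5, 6)]  # hypothetical target: y = 3x + 2

def fitness(chrom):
    # Total squared error of the candidate line c0 + c1*x against the data.
    c0, c1 = chrom
    return sum((c0 + c1 * x - y) ** 2 for x, y in data)

def crossover(p, q):
    # One-point crossover: a prefix of one parent, the suffix of the other.
    k = random.randrange(1, len(p))
    return p[:k] + q[k:]

def mutate(chrom, rate=0.3):
    # Each gene is perturbed with probability `rate`.
    return tuple(g + random.gauss(0, 0.5) if random.random() < rate else g
                 for g in chrom)

pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(30)]
initial_error = min(map(fitness, pop))
for _ in range(200):  # stopping rule: a fixed number of generations
    pop.sort(key=fitness)
    survivors = pop[:10]  # elitism: the fittest chromosomes always survive
    pop = survivors + [mutate(crossover(random.choice(survivors),
                                        random.choice(survivors)))
                       for _ in range(20)]
best = min(pop, key=fitness)
print(fitness(best) <= initial_error)  # True: elitism guarantees no regression
```

With enough generations the best chromosome tends to drift toward (2, 3); the stopping rule here is a fixed iteration count, the first of the two criteria mentioned above.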
Figure 13.2: A curve with poor fit to data points (left) and another with much better fit (right).
paper focuses on the outcome class classification problem instead of Grundy values
and restricts their investigation to nim. However, the project and its success serve as a
strong motivator for the application of genetic programming to combinatorial games,
and we hope we have done it justice in extending their results and adding to the body
of work combining these two mathematical endeavors. An automated conjecturing
tool was examined in [6] within the context of the game chomp, also focused on out-
come classes.
Recall that the Grundy value of an impartial game position is the smallest non-
negative integer not included in the Grundy values of its options [19, 11]. For more
information about combinatorial game theory, see [1, 2, 3, 4, 5, 8]. In this project, we
generate data points of the form (H, g), where H ∈ ℤn is a list of integers representing
a game position, and g ∈ ℤ is the associated Grundy value. A key challenge of this
project is the fact that heuristics are not often useful for calculating Grundy values.
In fact, either a function completely determines the value of a game, or it is incorrect.
This leaves us with the difficult task of devising a fitness function that represents dis-
tance, not a natural concept in the space of impartial game values and at the same
time leads to eventual convergence with an error of zero.
The paper proceeds as follows. We first give necessary background for genetic pro-
gramming, framed within the context of the Python package gpLearn and the method-
ology for the conducted experiments. We then introduce, in sequence, three impartial
rulesets motivated by genetic structures. The first two games use simplified structures
to allow for computational analysis. In Section 2, we utilize gpLearn as a tool to help
conjecture a pattern within the sequence of game values. This then points us in the
direction of the known game kayles, to which we prove its equivalence. Next, in Sec-
tion 3, we introduce a two-point crossover game and again implement our computa-
tional methods to obtain conjectured formulas for Grundy values based on the num-
ber of heaps. Although this alone does not prove the generalized formula, the program
provides a direction for the structure of a general formula, which we prove in Theo-
rem 3.2. Lastly, we examine a game whose ruleset is inspired by more typical genetic
changes and includes both crossover and mutation moves. In Theorem 4.3, we prove
its equivalence to a subset of arc kayles positions, and in Theorem 4.5, we prove their
game values. We conclude with future directions.
node is therefore recursively associated with a single function on the set of primitives
(see Figure 13.3).
Figure 13.3: A chromosome in tree format representing the function f(x, y) = ln(2 + x) / (x ⋅ (0 − y)).
(a1 , . . . , ak ) and (b1 , . . . , bk ) are swapped, leading to the new bit strings
B′1 = (b1 , . . . , bk , ak+1 , . . . , an ) and B′2 = (a1 , . . . , ak , bk+1 , . . . , bn ).
After crossover, there is a possible mutation, depending on the chosen mutation rate, turning, say, B′1 into B′′1 = (b1 , . . . , bi−1 , 1 − bi , bi+1 , . . . , bk , ak+1 , . . . , an ).
Motivated by these processes, we define a new impartial combinatorial game ga1.
To simplify both rules and analysis, we define a position as a single bit string. A muta-
tion move flips a single bit in the string, and since there is no real crossover in a single
string, we consider the flip of a sequence of bits to be representative of this operation.
Ruleset 2.1 (ga1). A position in ga1 is a bit string of length n. There are two move op-
tions. Crossover consists of choosing an integer k, 1 ≤ k ≤ n − 1, wherein all bits from
position 1 to k are flipped. A mutation move is simply the flip of any single bit in the
string. A move is legal only if the total number of substrings of the form 01 and 10
increases.
This latter restriction, that “disorder” increases, serves two purposes. Firstly, it
ensures that the game ends in a finite number of moves. Secondly, it represents the
tendency of chromosomes to combine in ever more complex ways over time. We define
the condition of increasing substrings 01 and 10 formally as follows.
Definition 2.2. The entropy of a bit string game is the number of substrings of the form
01 and 10.
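In code (our helper names, not the paper's), the entropy and the legality test read:

```python
def entropy(s):
    # Number of substrings 01 or 10 = number of adjacent positions that differ.
    return sum(1 for a, b in zip(s, s[1:]) if a != b)

def mutation(s, i):
    # Flip the single bit at index i.
    return s[:i] + ('1' if s[i] == '0' else '0') + s[i + 1:]

def crossover(s, k):
    # Flip all bits in positions 1..k.
    return ''.join('1' if c == '0' else '0' for c in s[:k]) + s[k:]

def legal(s, t):
    # A move is legal only if it strictly increases the entropy.
    return entropy(t) > entropy(s)

s = "001000111"
print(entropy(s))                 # 3
print(legal(s, mutation(s, 0)))   # True: entropy rises from 3 to 4
print(legal(s, crossover(s, 2)))  # False: flipping "00" to "11" merges runs
```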
The game ga1 is equivalent to a heap game in the following way. If we consider a
run in a bit string to be a maximal substring consisting of all 0s or all 1s, then any bit
string can be converted into a list of integers representing run sizes. For example, the
string 001000111 becomes (2, 1, 3, 3). Although this representation loses information
about which bits are associated with each integer, the symmetry of the ruleset makes
this lost information unnecessary to the game analysis. We can simplify the ruleset
further by Proposition 2.3.
Proposition 2.3. If H = (h1 , . . . , hk ) is a list of heaps representing a bit string in ga1, then
the following hold.
(1) Any heap equal to 1 can be removed.
(2) The order of the heaps does not affect the Grundy value.
(3) Each move is equivalent to one of the following.
(a) Split any heap hi > 3 into two heaps of size at least 2 each.
(b) Remove 1 from any heap hi ≥ 3.
(c) Remove 1 from any heap hi ≥ 5 and split the remainder into two heaps of at
least 2 each.
(d) Remove any heap of size 2 or 3.
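The bit-string-to-heaps conversion, together with the size-1 cleanup of Proposition 2.3(1) and the reordering allowed by 2.3(2), might look like this (our own helpers):

```python
from itertools import groupby

def to_heaps(s):
    # Run-length encode: each maximal run of equal bits becomes a heap.
    return [len(list(run)) for _, run in groupby(s)]

def simplify(heaps):
    # Drop negligible size-1 heaps and sort, since order is irrelevant.
    return sorted(h for h in heaps if h > 1)

print(to_heaps("001000111"))            # [2, 1, 3, 3]
print(simplify(to_heaps("001000111")))  # [2, 3, 3]
```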
As a direct result of Proposition 2.3, we need only consider single heap positions,
since the Grundy value of a list of heaps is equal to the nim-sum of the Grundy values
of the individual heaps.
The package gpLearn was employed as described in Section 1.1. In particular,
game values were computed using positions of the form (h1 , h2 , . . . , hn ) with varying
values for the number of heaps n and each heap hi . These solutions were set as ground
truth for the genetic programming implementation. The system generated random
chromosomes, each associated with a function from ℕn to ℕ, and used crossover and
mutation to progressively reduce the error between the values of these functions and
the ground truth values. Although no exact formula was found, after 14 generations,
a local minimum on the total absolute error was reached. Modifying hyperparameters
and running for another 7 generations led to the formula
where h is the size of a single heap and MOD(x,n) represents x (mod n). While not a
particularly accurate formula, we do see the presence of both modulo 3 and modulo 4.
This leads us to examine the exact computed Grundy values closely for periodicity of
order twelve and find a striking similarity with the values of the combinatorial game
kayles.
Ruleset 2.4 (kayles [9]). In kayles a player may remove one or two stones from any
heap, and if any stones remain, then these may be split into two heaps.
Genetically modified games | 235
kayles has octal code 0.77 [8] and has been well studied. In particular, it is known that the Grundy values for a single-heap game of kayles of size n are periodic with period 12 after n = 71 [12].
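A memoized mex recursion reproduces these values quickly. The sketch below is our own implementation, and the final check confirms the period of 12 beyond n = 71 over a finite window (a spot check, not a proof):

```python
from functools import lru_cache

def mex(values):
    # Least non-negative integer not in `values`.
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def kayles(n):
    # Remove 1 or 2 stones from a heap of n, optionally splitting the rest;
    # a two-heap result has the nim-sum (XOR) of its parts as its value.
    return mex({kayles(a) ^ kayles(n - t - a)
                for t in (1, 2) if n >= t
                for a in range(n - t + 1)})

print([kayles(n) for n in range(6)])  # [0, 1, 2, 3, 1, 4]
print(all(kayles(n) == kayles(n + 12) for n in range(71, 300)))  # True
```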
Theorem 2.5. The Grundy value of a single heap game of size n in ga1 is equal to the
value of a heap of size (n − 1) in kayles.
Proof. This is easy to compute for n ≤ 3. If n ≥ 4, then the options are {(j, n − j) : 2 ≤ j ≤
(n−j)}∪{(k, n−k−1) : 2 ≤ k ≤ n−k−1}∪{(n−1), (n−2)}. The options for an (n−1)-sized heap
in kayles are {(j, n−j−2) : 1 ≤ j ≤ (n−j−2)}∪{(k, n−k−3) : 1 ≤ k ≤ n−k−3}∪{(n−2), (n−3)}.
We can therefore consider a move in ga1 to be equivalent to the following process:
1. Remove a stone from a heap,
2. Make a kayles move in the resulting heap of size (n − 1), and
3. Add a stone back to all resulting heaps.
Therefore the game ga1 reduces to a game of kayles, and thus the Grundy values are
computable in the same manner as those for kayles.
Although our genetic programming method did not determine a precise error-free
formula for the Grundy values of ga1, it did inform our understanding of the game by
pointing us to its periodic nature.
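The equivalence is easy to confirm computationally for small heaps. The sketch below (our code) generates single-heap ga1 options following the option list in the proof of Theorem 2.5 — note that this list also allows removing two stones outright — and checks the shift against kayles:

```python
from functools import lru_cache

def mex(values):
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def kayles(n):
    # Remove 1 or 2 stones, optionally splitting the remainder into two heaps.
    return mex({kayles(a) ^ kayles(n - t - a)
                for t in (1, 2) if n >= t
                for a in range(n - t + 1)})

@lru_cache(maxsize=None)
def ga1(n):
    # Single-heap ga1; heaps of size <= 1 are negligible (Proposition 2.3(1)).
    if n <= 1:
        return 0
    opts = {ga1(j) ^ ga1(n - j) for j in range(2, n - 1)}       # split into two
    opts |= {ga1(k) ^ ga1(n - k - 1) for k in range(2, n - 2)}  # remove 1, then split
    opts |= {ga1(n - 1), ga1(n - 2)}                            # remove 1 or 2 stones
    return mex(opts)

print(all(ga1(n) == kayles(n - 1) for n in range(1, 80)))  # True, per Theorem 2.5
```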
If 1 ≤ x < y ≤ n are integers, then the two-point crossover using positions x and y results in the bit strings
B′1 = (a1 , . . . , ax−1 , bx , . . . , by−1 , ay , . . . , an ) and B′2 = (b1 , . . . , bx−1 , ax , . . . , ay−1 , by , . . . , bn );
that is, a substring with matching indices from each bit string is swapped. We wish to define an impartial game motivated by two-point crossover as a move mechanic.
to define an impartial game motivated by two-point crossover as a move mechanic.
As we did with ga1, we play only in a single bit string. This also means that defin-
ing mutation-type moves is redundant since any such move would be equivalent to
crossover with x = y − 1.
Ruleset 3.1 (ga2). A position in ga2 is a bit string of length n. On their turn a player
chooses two integers x, y ∈ [1, n], x < y, wherein all bits from position x to (y − 1) are
flipped. A move is legal only if the total number of substrings of the form 01 and 10
increases.
As with ga1, we can reduce ga2 to a game on heaps. Note that, again, a run of bits
can be represented by an integer. A legal move requires that at least one of {x, y} is
chosen within a run. The possible options are as follows:
1. Both x and y are within the bounds of a single run, equivalent to splitting a single
heap into three heaps.
2. x and y are each within the bounds of different runs, equivalent to splitting any
two heaps into two.
3. x and y are chosen so that exactly one single heap is split into two.
Just as with Proposition 2.3, we see that heaps of size 1 are negligible, as is the order
of the heaps. However, since players can alter multiple heaps in a single move, we
cannot compute the Grundy value by simply computing the nim-sum of the Grundy
values of single heap games.
As in Section 2, we applied gpLearn with the modified function list to computa-
tionally determined Grundy values, without first examining these values. Once the
number of nonzero heaps was included as a primitive value in games with more than
two heaps (e. g., (3, h1 , h2 , h3 ) is a three-heap game, whereas (4, h1 , h2 , h3 , h4 ) represents
a four-heap position), genetic programming proved much more successful, yielding
the formulas below with 100 % accuracy:
1. For a single-heap game with heap size h,
MOD(SUB(h,1),PLUS1(PLUS1(1))),
which is equivalent to (h − 1) (mod 3).
2. With two heaps h1 , h2 ,
MOD(PLUS1(SUB(ADD(h1 , h2 ), XOR(h1 , h1 ))), PLUS1(PLUS1(EQUAL(h1 , h1 )))),
which is equivalent to (h1 + h2 + 1) (mod 3).
3. For a three-heap game with inputs 3, h1 , h2 , h3 , we found
MOD(ADD(ADD(h3 , h1 ), h2 ), ADD(3, SUB(0, 0))),
which reduces to (h1 + h2 + h3 ) (mod 3).
Although these results themselves do not provide a generalized formula, they do gen-
eralize easily to the following:
Theorem 3.2. Let H = (h1 , . . . , hn ) be an n-heap position in ga2, and let t be the smallest
nonnegative integer such that (n + t) ≡ 0 (mod 3). Then the Grundy value of H is (t +
∑ni=1 hi ) (mod 3).
Proof. Note first that although we can eliminate heaps of size 1 in our analysis of ga2
just as we did in Proposition 2.3 for ga1, we are not compelled to do so. In fact, not
removing them makes for a simpler analysis here.
In the case of a single stone, it is clear that the Grundy value is 0 as no moves are
possible. It is also easy to see that the claim holds when all heaps have size 1 except
possibly a single heap of size 2, so we need only consider the remaining cases. We
proceed now by minimum counterexample. Assuming that the claim is false, let m be
the smallest integer such that not all games on m-many stones follow the statement
of the theorem. Among all such games with m stones, let H = (h1 , . . . , hj ) be a position
with the greatest number of heaps.
For a positive integer x, let {x1 , x2 } and {x1 , x2 , x3 } represent all sets of positive inte-
gers satisfying x1 + x2 = x and x1 + x2 + x3 = x. For any i, k with 1 ≤ i < k ≤ j, the options
of H are H \ {hi } ∪ {hi1 , hi2 }, H \ {hi } ∪ {h1i , h2i , h3i }, and H \ {hi , hk } ∪ {hi1 , hi2 , hk1 , hk2 }, i. e.,
all positions in which any one heap hi of sufficient size is removed and replaced with
two or three heaps whose sum is hi , and those in which any two heaps hi , hk ≥ 2, are
removed and each replaced with two heaps whose sums are hi , hk , respectively.
All options contain m total stones since we have not removed any. Further, every option has more heaps than H has and therefore, by the choice of minimal counterexample, adheres to the statement of the claim. Thus their Grundy values are (m + t − 1) (mod 3) or (m + t − 2) (mod 3), according to whether an option has one or two more heaps than H. If there is a heap of size at least three,
then it can be split into two heaps via a crossover move or three heaps via mutation
(changing only one bit). Similarly, if there are two heaps of size at least two, then these
can become four heaps through crossover or three heaps through mutation. Therefore
H has options with one more and two more heaps, and hence both values (m + t − 1)
(mod 3) and (m + t − 2) (mod 3), respectively, are present among the options. Thus
H must have the Grundy value (m + t) (mod 3), contradicting the choice of H as a
minimal counterexample.
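Theorem 3.2 can also be sanity-checked by brute force over all small positions. The code below is ours: options follow the three move types listed above (split one heap into two or three parts, or split two heaps into two parts each), positions are sorted tuples so that order is ignored, and the mex recursion is compared with the closed formula.

```python
from functools import lru_cache
from itertools import combinations

def mex(values):
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(pos):
    # pos: sorted tuple of heap sizes.
    opts = set()
    heaps = list(pos)
    for i, h in enumerate(heaps):
        rest = heaps[:i] + heaps[i + 1:]
        for a in range(1, h):                 # split one heap into two
            opts.add(grundy(tuple(sorted(rest + [a, h - a]))))
        for a in range(1, h - 1):             # split one heap into three
            for b in range(1, h - a):
                opts.add(grundy(tuple(sorted(rest + [a, b, h - a - b]))))
    for i, j in combinations(range(len(heaps)), 2):
        rest = [h for k, h in enumerate(heaps) if k not in (i, j)]
        for a in range(1, heaps[i]):          # split two heaps into two each
            for b in range(1, heaps[j]):
                opts.add(grundy(tuple(sorted(rest + [a, heaps[i] - a,
                                                     b, heaps[j] - b]))))
    return mex(opts)

def formula(pos):
    # Theorem 3.2: (t + sum) mod 3, with t making len(pos) + t divisible by 3.
    return ((-len(pos)) % 3 + sum(pos)) % 3

def partitions(m, least=1):
    # All multisets of positive integers summing to m, as sorted tuples.
    if m == 0:
        yield ()
    for first in range(least, m + 1):
        for rest in partitions(m - first, first):
            yield (first,) + rest

print(all(grundy(p) == formula(p) for m in range(1, 9) for p in partitions(m)))
```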
Figure 13.4: Example of an arc kayles position, which is equivalent to a position in crossover-
mutation.
to k from B1 are swapped with the bits 1 to k of B2 , in particular, leading to the new bit
strings
Mutation involves choosing a single gene ci from either of the bit strings and flipping
it to 1 − ci . In both cases the move is legal if the total number of substrings of the form
01 and 10 increases.
Ruleset 4.2 (arc kayles [17]). Let G be a graph. On a player’s turn, they remove an
edge of G along with all edges incident to it.
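The arc kayles positions arising here are small enough to value directly by a mex recursion on edge sets. The sketch below (our code) builds 2 × n grid graphs and recovers the first instances of Theorem 4.4:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(edges):
    # edges: frozenset of edges, each a frozenset of two vertices.
    opts = set()
    for e in edges:
        # Playing e removes it together with every edge sharing a vertex.
        opts.add(grundy(frozenset(f for f in edges if not (f & e))))
    g = 0
    while g in opts:
        g += 1
    return g

def grid_2xn(n):
    # 2-row, n-column grid graph with vertices (row, col).
    edges = {frozenset({(0, c), (1, c)}) for c in range(n)}  # vertical rungs
    for c in range(n - 1):
        edges.add(frozenset({(0, c), (0, c + 1)}))           # top rail
        edges.add(frozenset({(1, c), (1, c + 1)}))           # bottom rail
    return frozenset(edges)

print([grundy(grid_2xn(n)) for n in range(1, 6)])  # [1, 0, 1, 0, 1], per Theorem 4.4
```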
Figure 13.5: Mutation: The result of a mutation move at ai in crossover-mutation (top right) and
the corresponding resulting position after a move in arc kayles (top left). Crossover: The result of a
crossover move at i − 1 in crossover-mutation (bottom right) and the corresponding resulting posi-
tion after a move in arc kayles (bottom left). Dashed lines represent the edges that were eliminated
by moving in arc kayles.
ilarly for B2 . We label the edges of GAK by the corresponding bit labels in B1 or B2 ,
respectively. For the crossover moves in GCM , if there exists a crossover move at ai ,
ai+1 and bi , bi+1 , then in GAK , there are vertices, call them vai ,ai+1 and vbi ,bi+1 , between
respectively labeled edges. Denote the edge connecting vai ,ai+1 and vbi ,bi+1 by ei,i+1 . See
Figure 13.4 for an example of the equivalence, in which we have represented the bits
as colored and uncolored vertices.
To show that GAK is equivalent to GCM via this construction, we need to show that
there exists a bijection between the options. In particular, that GCM − GAK = 0. Since
the rulesets are impartial, we consider GCM + GAK . Suppose the first player moves in
GCM with a mutation at ai . By the existence of this mutation this means that both ai−1
and ai+1 were the same as ai (if they exist); otherwise, the entropy would not have
increased. After the turn, neither can be mutated thereafter because again, it would
not increase the entropy. Also, this move disallows future crossover at ai because it
will not increase the entropy. Player 2 responds by removing the edge ai ∈ GAK . This
has the effect of removing all incident edges, in particular, ai−1 , ai+1 , ei−1,i , and ei,i+1 if
they exist (see Figure 13.5). If instead Player 1 chose a crossover move in GCM at position
i−1, this eliminates the possibility of future mutations at positions ai , ai−1 , bi , and bi−1 .
The corresponding move for Player 2 is to respond in GAK by removing the edge ei−1,i ,
which also removes all edges ai , ai−1 , bi , and bi−1 (see Figure 13.5).
If instead Player 1 moved in GAK , we simply reverse the roles in the above argument,
and Player 2 will always have a response. Thus Player 2 will win this game under nor-
mal play. Hence GCM and GAK are equivalent.
Theorem 4.4 ([4]). Let G be a position in arc kayles in the form of a 2 × n grid graph. Then G has value 0 if n is even and value ∗ if n is odd. Furthermore, this game value does not change under the addition of up to two tufts (i. e., induced stars whose center is a vertex of the grid graph).
Theorem 4.5. Let G(k) be a position in arc kayles in the form of a 2 × k grid graph with
pendant edges adjacent to 3 or 4 of the four corners (see Figure 13.6). Then G(2k + 1) has
the game value ∗2 if k ∈ {0, 1} and ∗ if k ≥ 2, and G(2k) has the value 0 for all k ≥ 1 when
h0 is present.
Proof. Note that if k ≤ 1, then the possible values of G(2k + 1) are easily demonstrated
by exhaustion. The value of G(2k) is just as easily found to be in 𝒫 by considering
an involution strategy, whereby the second player responds to a play on edge e with
a play on the edge equivalent to e under 180° rotational symmetry. We now proceed
by induction on k to find the remaining values of G(2k + 1) whether or not edge h0 is
present.
Let e be an edge in G(2k + 1), and consider H(e) to be the option yielded by play
on e (see Figure 13.7). We demonstrate that no option of G(2k + 1) has value ∗.
H(h1 ) Play on edge x results in a graph of the form 2 × (2k − 1) with three pendant edges.
If k is sufficiently large, then this graph has value ∗ by the inductive assumption.
Otherwise, for the base case G(5), when k = 2, the value can be checked exhaustively
to be ∗ with or without the presence of h0 . In either case, H(h1 ) has an option of
value ∗ and hence does not itself have value ∗.
H(h2 ) Play on the edge x results in a position with value ∗ by Theorem 4.4. Therefore
H(h2 ) does not have value ∗.
Genetically modified games | 241
H(h3 ) If h0 is not present, then play on edge y yields a path with value ∗ disconnected
from a 2 × (2k − 2) grid graph with two pendant edges, which by Theorem 4.4
has value 0. If h0 is present, then play on edge z yields the sum of a small graph
with value ∗ and a 2 × (2k − 4) grid graph with two pendant edges. In both cases,
the resulting sums are ∗. Therefore H(h3 ) does not have value ∗.
H(h4 ) Here h4 can be any horizontal edge to the right of h3 . Play on edge w results in
a game with a sum of two positions with opposite parity and hence has value
∗ + 0 = ∗ by Theorem 4.4, so H(h4 ) does not have value ∗.
H(v1 ) This graph has value 0 by Theorem 4.4.
H(v2 ) If h0 is present, then we have the sum of a path with value ∗2 and a game with
value ∗ by Theorem 4.4. If h0 is not present, then the path has value ∗. So H(v2 )
has value ∗3 or 0.
H(v3 ) We invoke Theorem 4.4 yet again, as the resulting graph is a pair of grid graphs
with one or two pendant edges each, both with value ∗ or both with value 0.
Therefore H(v3 ) has value 0.
Since no option of G(2k + 1) has value ∗ and G(2k + 1) ∈ 𝒩 , we see that it has value ∗
for k ≥ 2.
Theorem 4.5 leads directly to the following corollary about a family of crossover-
mutation positions.
Corollary 4.6. The cm game composed of a length-n string of all 1s and a length-n string
of all 0s has value 0 if n is odd, ∗2 if n ∈ {2, 4}, and ∗ otherwise.
Proof. This position is equivalent to the arc kayles position G(n − 1) with h0 present,
as indicated in Theorem 4.5.
Ruleset 4.7 (cram [4]). In the impartial game cram, players take turns filling a pair of
empty orthogonally adjacent spaces in a grid. See Figure 13.8.
Figure 13.8: Example of a 3 × 5 cram position that is equivalent to the arc kayles and cm positions
in Figure 13.4.
The reader may recognize cram as the impartial version of domineering. All cm posi-
tions are also associated with 2×n cram positions, except for a few with extra pendant
edges that, if realized in cram, require a board of width at least three. This associ-
ation is produced by first considering the associated arc kayles position. See Fig-
ure 13.8 for a cram position, which is equivalent to the arc kayles position pictured
in Figure 13.4. Note that this position cannot be realized by a 2 × n cram position for
any n.
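Positions of this size are small enough to evaluate by brute force. The following memoized search is our own sketch, not a method from [4]; a position is encoded simply as the set of still-free cells.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the smallest nonnegative integer not in values."""
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def cram_grundy(free):
    """Grundy value of a cram position, encoded as a frozenset of free cells."""
    opts = set()
    for (r, c) in free:
        for other in ((r + 1, c), (r, c + 1)):  # vertical / horizontal domino
            if other in free:
                opts.add(cram_grundy(free - {(r, c), other}))
    return mex(opts)

def grid(rows, cols):
    return frozenset((r, c) for r in range(rows) for c in range(cols))

# 2 x 1 has a single move, to the empty board, so its value is *;
# 2 x 2 has only options of value *, so its value is 0.
print([cram_grundy(grid(2, n)) for n in range(1, 7)])
```

Blocked squares or extra corner spaces are handled by the same function: simply pass the corresponding set of free cells instead of a full grid.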
With a cm position (B1 , B2 ) with B1 = (a1 , . . . , an ) and B2 = (b1 , . . . , bn ), we associate
an arc kayles position as above, resulting in a subgraph of a 2×(n−1) grid graph with
up to four pendant edges at the corners. A 2 × n grid graph in arc kayles is equivalent
to a 2 × n grid in cram, as the removal of a vertical (horizontal) edge and its neighbors
relates to adding a vertical (horizontal) game piece to the cram board. A single vertex
missing from this position is equivalent to a blocked square in the associated grid, and
the pendant edges are associated with extra spaces, each of which shares an edge with
one of the four corner spaces, without sharing edges with any other spaces.
Most of the remaining cm positions are equivalent to 2×n positions in cram which,
while remaining unsolved, have been addressed in the literature [4]. It is worth noting
that all cm positions in which no crossover move is possible are simply represented
by a disjunctive sum of paths in arc kayles, whose values are known [15].
values. But we have also seen that it was solved through the use of genetic program-
ming, and therefore this method could prove useful in the future. At the very least, it
could be used to reduce the time and effort taken to conjecture formulas for Grundy
values.
We are curious whether genetic programming can be used for problems
within CGT that a mathematician simply examining a list of values is unlikely to solve.
To answer this, we suggest more effort be put into this practice. It would be very useful, for
example, to compile a database of impartial combinatorial games with known and
as yet unknown solutions. This could help inform the choice of default functions to
include in future genetic programming attempts.
There are modifications that we suggest be made to future GP for CGT projects.
Firstly, it would be beneficial to develop a more robust fitness function. As there is
no obvious metric over the set of nimbers outside of the nim-sum, an analytical ap-
proach to metrics over impartial games would be helpful. Secondly, the method for
fitness employed in [16] does not use precomputed data points at all. Instead, the au-
thor determines the fitness of a program by comparing the computed outcome classes
of a set of positions with those of its options and relating the fitness to the number
of deviations from the basic tenets of impartial games that are found among these
computations. Something similar could be used for Grundy value programming, in-
volving the mex (minimum excludant) function. However, the distance between the
actual and computed values remains a possible stumbling block.
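A mex-based fitness along these lines might be sketched as follows. This is our illustration only; `candidate` and `options` are hypothetical placeholders, not the setup used in [16].

```python
def mex(values):
    """Minimum excludant: the smallest nonnegative integer not in values."""
    m = 0
    while m in values:
        m += 1
    return m

def mex_consistency_fitness(candidate, positions, options):
    """Fraction of positions where a candidate Grundy-value program agrees
    with the mex of the values it assigns to the position's options.
    `candidate` and `options` are hypothetical placeholders."""
    hits = 0
    for p in positions:
        if candidate(p) == mex({candidate(q) for q in options(p)}):
            hits += 1
    return hits / len(positions)

# For single-heap nim, the program n -> n is perfectly mex-consistent:
score = mex_consistency_fitness(lambda n: n, range(1, 20), lambda n: range(n))
print(score)  # 1.0
```

A fitness of this shape rewards internal consistency with the mex rule rather than agreement with precomputed data, but it leaves open the distance problem mentioned above: a program that is wrong everywhere by the same nimber permutation can still score well.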
Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, A K Peters, Ltd., Wellesley, MA, 2007.
[2] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 1, second edition, A K Peters, Ltd., Natick, MA, 2001.
[3] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 2, second edition, A K Peters, Ltd., Natick, MA, 2003.
[4] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 3, second edition, A K Peters, Ltd., Natick, MA, 2003.
[5] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 4, second edition, A K Peters, Ltd., Wellesley, MA, 2004.
[6] A. Bradford, J. K. Day, L. Hutchinson, B. Kaperick, C. E. Larson, M. Mills, D. Muncy, and
N. Van Cleemput, Automated conjecturing II: chomp and reasoned game play, J. Artificial
Intelligence Res. 68 (2020), 447–461.
[7] R. M. Brady, Optimization strategies gleaned from biological evolution, Nature 317 (1985),
804–806.
[8] J. H. Conway, On Numbers and Games, second edition, A K Peters, Ltd., Natick, MA, 2001.
[9] H. E. Dudeney, The Canterbury Puzzles (and Other Curious Problems), EP Dutton, New York,
1908.
[10] P. Galinier and J.-K. Hao, Hybrid evolutionary algorithms for graph coloring, J. Comb. Optim. 3
(1999), 379–397.
1 Introduction
Consider the following situation: two players, Alice and Bob, alternate turns to partition
a given finite collection of positive integers into components in the form of a nontrivial
Euclidean division. A player who cannot move, because each number is a “1”,
loses the game. Only one number may be split at a time. For example, if
Alice starts from the number 7, then her options are 1 + ⋅ ⋅ ⋅ + 1, 2 + 2 + 2 + 1, 3 + 3 + 1, 4 + 3,
5+2, and 6+1; here the “+” sign from arithmetic functions becomes the disjunctive sum
operator (a convenient game component separator) in the game setting. By observing
that we may remove any pair of equal numbers (by a mimicking strategy) and we
may remove a one unless the option is the terminal position (since its set of options is
empty), the set of options from 7 simplifies to {1, 2, 4+3, 5+2, 6}. Suppose now that Alice
starts playing from the disjunctive sum 7 + 2. By the above analysis it is easy to find a
winning move to 2+2+2+1+2. What if she instead starts from the composite game 7+3?
We study two-player normal-play games defined with the nonnegative or positive
integers as the set of positions. The two players alternate turns, and if a player has no
move option, then he/she loses. At each stage of play, the move options are the same
independently of who starts. In combinatorial game theory, this notion is referred to as
impartial. Games terminate in a finite number of moves, and there is a finite number of
options from each game position, i. e., games are short. These game axioms will allow
us to use the famous theory discovered independently by Sprague [9] and Grundy [4],
which generalizes normal-play nim, analyzed by Bouton [2], into disjunctive sum play
of any finite set of impartial normal-play games. Let opt(G) denote the set of options
Douglas E. Iannucci, University of the Virgin Islands, St. Thomas, Virgin Islands, US
Urban Larsson, National University of Singapore, Singapore, Singapore, e-mail:
urban031@gmail.com
https://doi.org/10.1515/9783110755411-014
246 | D. E. Iannucci and U. Larsson
We will use the name of the arithmetic function and prepend the letters M or S, respec-
tively, for the move-to and subtract versions of a particular game. The following two
examples are excerpts from Section 2.1.
Example 1. If the players may move-to a divisor, then we get for example: from 6 the
options are 1, 2, and 3. From 7 the only option is 1. Here the divisors must be proper
divisors.
Example 2. If the players may subtract a divisor, we get for example: from 6 the op-
tions are 5, 4, 3, and 0, and from 7 the options are 0 and 6.
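Both conventions are easy to experiment with numerically. The following sketch is ours (the helper names are not from the paper); it builds a memoized Sprague-Grundy function from any option map.

```python
from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

def grundy(options):
    """Build a memoized Sprague-Grundy function from an option map."""
    @lru_cache(maxsize=None)
    def g(n):
        return mex({g(k) for k in options(n)})
    return g

# Example 1: move-to a proper divisor (from 6: options 1, 2, 3; from 7: only 1).
maliquot = grundy(lambda n: [d for d in range(1, n) if n % d == 0])
# Example 2: subtract a divisor (from 6: options 5, 4, 3, 0; from 7: 0 and 6).
saliquot = grundy(lambda n: [n - d for d in range(1, n + 1) if n % d == 0])

print([maliquot(n) for n in range(1, 9)])  # [0, 1, 1, 2, 1, 2, 1, 3]
print([saliquot(n) for n in range(9)])     # [0, 1, 2, 1, 3, 1, 2, 1, 4]
```

The printed values agree with the nim-value tables given in Section 2.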
Before this paper, instances where number theory connects with impartial games
might individually have seemed like “lucky cases”. However, we feel that the relatively
large number of such examples justifies a more systematic study.1
1 See classical works such as Winning Ways [1] for results on impartial games coinciding with number
theory.
Game values of arithmetic functions | 247
Let us list some game rules induced by some standard arithmetic functions. When
there is only one single option, we may omit the set brackets.
1. The aliquot (divisor) games:
(a) maliquot. Move-to a proper divisor, i. e.,
opt(n) = {k : 1 ≤ k < n, k | n}.
(b) saliquot. Subtract a divisor, i. e.,
opt(n) = {n − k : 1 ≤ k ≤ n, k | n}.
2. The aliquant (nondivisor) games:
(a) maliquant. Move-to a nondivisor, i. e.,
opt(n) = {k : 0 ≤ k < n, k ∤ n}.
(b) saliquant. Subtract a nondivisor, i. e.,
opt(n) = {n − k : 1 ≤ k < n, k ∤ n}.
3. The τ-games:2
(a) mtau. Move-to the number of proper divisors, i. e., opt(n) = τ′ (n).
(b) stau. Subtract the number of divisors, i. e., opt(n) = n − τ(n).
4. The totative (relative prime residue) and the nontotative games:3
(a) totative. Move-to any relatively prime residue, i. e.,
opt(n) = {k : 1 ≤ k < n, (k, n) = 1}.
2 Here τ(n) counts the natural divisors of n: τ(n) = ∑d|n 1. The divisor function τ is multiplicative with
τ(pa ) = a + 1 for all primes p and natural numbers a. In one of our game settings, we use instead the
number of proper divisors, and so let τ′ = τ − 1, so that, in particular, τ′ (1) = 0 and τ′ (2) = 1 (here we
lose multiplicativity).
3 The move-to and subtract variations are the same, because (k, n) = 1 if and only if (n − k, n) = 1.
opt(n) = n − ϕ(n).
6. dividing. Divide the given number into at least two equal parts, i. e.,
opt(n) = {k + k + ⋅ ⋅ ⋅ + k (m copies of k) : km = n, m > 1}.
7. dividing-and-remainder. Divide the given number into equal parts and a re-
mainder, which is smaller than the other parts and possibly 0, i. e.,
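For the dividing game (item 6), an option into m equal parts k is worth 0 if m is even and 𝒮𝒢(k) if m is odd, since equal components cancel in pairs under the nim-sum. A brute-force sketch of ours:

```python
from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def sg_dividing(n):
    """Nim-value of a heap in dividing: split n into m > 1 equal parts k.
    The XOR of m copies of sg_dividing(k) is 0 for even m, sg_dividing(k)
    for odd m."""
    opts = set()
    for m in range(2, n + 1):
        if n % m == 0:
            opts.add(sg_dividing(n // m) if m % 2 else 0)
    return mex(opts)

print([sg_dividing(n) for n in range(1, 13)])
```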
Example 3. Let 0 be the empty heap. Suppose that from a heap of size n > 0, the
players can remove the number of divisors of n. The option of n = 1 is 0. A heap of
size 2 also has 0 as an option, but 1 is the option of 3. The nim-sequence thus starts:
0, 1, 1, 0, 0, 1. The heap of size 5 has 3 as the option, for which the nim-value is 0. On
one heap, play is trivial, whereas the problem of determining the winner is as hard as
the complexity of the sequence.
Example 4. Let 0 be the empty heap. Suppose that from a heap of size n > 0 the players
can move-to the number of proper divisors of n. The option of n = 1 is 0. The heaps of
size two and three have moves to the heap with a single pebble. The number of proper
divisors of n = 4 is 2, and hence the option is 2. As for all primes, the option of n = 5 is
the heap of size one. Thus the nim-sequence starts: 0, 1, 0, 0, 1, 0.
Example 5. Consider binary games. Of course, even playing a disjunctive sum of bi-
nary games gives only binary values. Consider, for example, totient, where 𝒮𝒢 (2+3+
4+5) = 𝒮𝒢 (2)⊕𝒮𝒢 (3)⊕𝒮𝒢 (4)⊕𝒮𝒢 (5) = 1⊕0⊕0⊕1 = 0. Hence 2+3+4+5 is a second player
winning position. To see this in play, suppose that the first player selects the heap of
size 4 and moves to 2 + 3 + ϕ(4) + 5 = 2 + 3 + 2 + 5. Now 𝒮𝒢 (2 + 3 + 2 + 5) = 1 ⊕ 0 ⊕ 1 ⊕ 1 = 1,
which is a winning position for the player to move, and indeed, since every move
changes the parity, we have automatic, “random” optimal play even if we play a sum
of games, provided that they are all binary. In particular, if we play a disjunctive sum of
totient games, then the optimal strategy is to play any move. Hence these games seem
less interesting in that respect as 2-player games, but suppose that we instead play a
disjunctive sum of the totient game G with the totative game H. Now an efficient algo-
rithm for computing the binary nim-value (see Theorem 8) is interesting again. What
is a sufficient move in the first player winning position 7totient + 7totative ? (There are
exactly three winning moves.)
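The claim about 7totient + 7totative can be checked directly. The brute-force ϕ and the move enumeration below are our sketch, not code from the paper.

```python
from functools import lru_cache
from math import gcd

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

def phi(n):
    """Euler's totient by brute force (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

@lru_cache(maxsize=None)
def sg_totient(n):
    # the single option from n > 1 is phi(n); 1 is terminal
    return 0 if n == 1 else mex({sg_totient(phi(n))})

@lru_cache(maxsize=None)
def sg_totative(n):
    return mex({sg_totative(k) for k in range(1, n) if gcd(k, n) == 1})

# 7_totient + 7_totative is a first-player win since the XOR is nonzero ...
assert sg_totient(7) ^ sg_totative(7) != 0
# ... and the winning moves are exactly the options that make the XOR zero.
wins = []
if sg_totient(phi(7)) ^ sg_totative(7) == 0:
    wins.append(("totient", 7, phi(7)))
wins += [("totative", 7, k) for k in range(1, 7)
         if gcd(k, 7) == 1 and sg_totient(7) ^ sg_totative(k) == 0]
print(wins)  # three winning moves, all in the totative component
```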
Example 6. Consider the game played from a heap of size n, where the options are to
play to any nonempty set of proper divisors of n. If n = 6, then the options are the
single heaps of size 1, 2, 3, respectively, the pairs of heaps 1 + 2, 1 + 3, and 2 + 3, and the
triple 1 + 2 + 3. A heap of size one has no option, and a heap of size two or three has a
heap of size one as option. Hence 𝒮𝒢 (6) = 2. The nim-value of each prime is one, and
so on.
Example 7. Consider the game played from a heap of size n, where the options are
to play to any finite set of relatively prime residues smaller than n. If n = 5, then the
options are all nonempty subsets of {1, 2, 3, 4}. In spite of the relatively large number of
options, in this particular case, the 𝒮𝒢 -computation becomes easy. A heap of size one
has no option, and so 𝒮𝒢 (1) = 0. Therefore 𝒮𝒢 (2) = 1, and so 𝒮𝒢 (3) = 2. A heap of size
4 has 1, 3, and 1 + 3 as options, and so 𝒮𝒢 (4) = 1. By this, obviously, 𝒮𝒢 (5) = 4. A heap
of size 6 has few options, and easily 𝒮𝒢 (6) = 1. A heap of size 7 has many options, and
likewise easily 𝒮𝒢 (7) = 8, the smallest unused power of two. This game is revisited in
Theorem 17.
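Example 7's values can be reproduced by ranging over all nonempty subsets of totatives. This brute-force sketch is ours; the 2^ϕ(n) blow-up keeps it to very small n.

```python
from functools import lru_cache
from itertools import combinations
from math import gcd

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def sg_powerset_totative(n):
    """Options: every nonempty set of totatives of n, played as a disjunctive
    sum, whose value is the XOR of the components' values."""
    totatives = [k for k in range(1, n) if gcd(k, n) == 1]
    opts = set()
    for r in range(1, len(totatives) + 1):
        for subset in combinations(totatives, r):
            value = 0
            for k in subset:
                value ^= sg_powerset_totative(k)
            opts.add(value)
    return mex(opts)

print([sg_powerset_totative(n) for n in range(1, 8)])  # [0, 1, 2, 1, 4, 1, 8]
```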
The word “Play . . . ” (read: “Play is defined by . . .”) is intentionally left open for inter-
pretation. Here it will have one of two meanings: either the players move-to the
numbers, or they subtract the numbers from the given heap (size). Items (iii) and (iv)
typically split a heap into several heaps to be played in a disjunctive sum of heaps.
Note that (iii) is binary, although it does not concern counting functions. The rulesets
induced by (iii) and (iv) are not listed above, but naturally build on items 1, 2, and 4.
We define them in their respective sections.
Some arithmetic functions directly induce a disjunctive sum of games, such as the
division algorithm or the factoring problem. For the ruleset on Euclidean division from
the first paragraph (Section 4.1), we conjecture that the relative nim-values 𝒮𝒢 (n)/n
tend to 0 with increasing heap sizes.
In Section 2, we study singleton games. In Section 3, we study counting games.
In Section 4, we study dividing games, where division induces a disjunctive sum of
games, and similarly for Section 5 with factoring games. In Section 6, we study dis-
junctive sum games on the full set induced by the arithmetic function. In Section 7, we
study powerset disjunctive sum games. Section 8 is devoted to some future directions.
For reference, let us include a table of studied rulesets in the order of appearance,
including some significant properties. The abbreviations are m-t: move-to, subtr.: sub-
traction, div.: divisor, rel.: relative, n.: number, pr.: problem, disj.: disjunctive. The so-
lution functions are defined in the respective sections, but let us list them here as well.
In particular, we encounter indexing functions, where numbers with a certain property
are enumerated, starting with 1 for their smallest member, etc. In the table, we find
the following functions: Ω, number of prime divisors counted with multiplicity; ω,
the number of prime divisors counted without multiplicity; Ω2 , the number of prime
divisors counted with multiplicity, unless the divisor is 2, which is counted without
multiplicity; v, usual 2-valuation; io , index of largest odd divisor; ip , index of smallest
prime divisor.
2 Singletons
This section concerns items 1, 2, and 4 from the introduction, the aliquots, aliquants,
and totatives.
hard. Here the arithmetic function is f (n) = {d : d | n, n ∈ ℕ0 }. In this section the set
of game positions is ℕ. Since all nonnegative integers divide 0, we do not admit 0 to
the set of game positions. The second game saliquot turns out to be somewhat more
interesting.
Let n ∈ ℕ. Then Ω(n) is the number of prime factors of n, counting multiplicities,
and v = v(n) is the 2-valuation of n = 2v m, where m is odd.
Example 8. From 6 the options are 1, 2, and 3. From 7 the only option is 1.
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1 1
3 1 1
4 1,2 2
5 1 1
6 1, 2, 3 2
7 1 1
8 1, 2, 4 3
Proof. We have that 0 = 𝒮𝒢 (1) = Ω(1), since there are no options from 1, and 1 has
no prime factors. Suppose that n > 1 has k prime factors, counting multiplicities.
Then, for each x ∈ {1, . . . , k − 1}, there is a divisor of n corresponding to a move-to a
number with x prime factors. Since a player is not allowed to divide by n, the number
of prime factors decreases by moving, and so by induction there is no option of nim-
value k. By the mex-rule the result holds, 𝒮𝒢 (n) = k = Ω(n).
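The identity 𝒮𝒢(n) = Ω(n) for maliquot is easy to spot-check on an initial segment; a brute-force sketch of ours:

```python
from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

def big_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

@lru_cache(maxsize=None)
def sg_maliquot(n):
    # options: the proper divisors of n
    return mex({sg_maliquot(d) for d in range(1, n) if n % d == 0})

assert all(sg_maliquot(n) == big_omega(n) for n in range(1, 500))
print("ok")
```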
Example 9. The options of 6 are 5, 4, 3, and 0. From 7 the options are 0 and 6.
Since 0 is always an option from n, it is clear that the nim-value of a nonzero po-
sition is greater than 0. The initial nim-values are as follows:
n opt(n) 𝒮𝒢(n)
0 ⌀ 0
1 0 1
2 0, 1 2
3 0, 2 1
4 0, 2, 3 3
5 0, 4 1
6 0, 3, 4, 5 2
7 0, 6 1
8 0, 4, 6, 7 4
Theorem 2. Consider saliquot. Then 𝒮𝒢 (0) = 0. Suppose that n > 0 and let n = 2k m,
where 2 ∤ m and k ≥ 0. Then 𝒮𝒢 (n) = v(n) + 1 = k + 1.
Proof. The case 0 | 0 is excluded in the definition of opt. Hence there is no move from
0 and so 𝒮𝒢 (0) = 0. Note that if n > 0, then 0 of nim-value 0 is an option.
Suppose that n is odd. Then, for all d | n, n − d is even. By induction the even num-
bers have nim-value greater than one. Hence, since 0 is an option, the mex-function
gives 0 + 1 = 1 as the nim-value of n = 20 m.
Suppose that n is even. Then n − 1, which is odd, is an option. It has nim-value 1
by induction. (Hence 𝒮𝒢 (n) ⩾ 2.) Let d = 2ℓ q ⩽ n, where we may assume that ℓ > 0,
since we are interested in the even options.
Since d is a divisor of n, we have that 0 ≤ ℓ ⩽ k and q | m with odd m > 1. We
get n − d = 2k m − 2ℓ q = 2ℓ (2k−ℓ m − q). The number x = 2k−ℓ m − q is odd, of nim-value
1 by induction, unless k = ℓ. In this case, if m = q, then the option is 0, so suppose
m > q. Since both m and q are odd, the option n − d has a greater 2-valuation than n,
i. e., v(n − d) ≥ k + 1. Therefore no option has 2-valuation k, and hence by induction no
option has nim-value k + 1.
Since ℓ can be chosen freely in the interval 0 ≤ ℓ < k, by induction all nim-values
0 ≤ 𝒮𝒢 (n − d) ≤ k can be reached; since m > 1, we may take q < m. The result
follows.
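Theorem 2 can likewise be spot-checked numerically (our sketch):

```python
from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

def v2(n):
    """2-valuation: the exponent of 2 in n (for n >= 1)."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

@lru_cache(maxsize=None)
def sg_saliquot(n):
    # options: n - d for each divisor d of n (d = n gives the option 0)
    return mex({sg_saliquot(n - d) for d in range(1, n + 1) if n % d == 0})

assert sg_saliquot(0) == 0
assert all(sg_saliquot(n) == v2(n) + 1 for n in range(1, 500))
print("ok")
```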
Since all numbers divide 0, 0 does not have any options, and hence 𝒮𝒢 (0) = 0. On
the other hand, 0 does not divide any nonzero number, and hence 0 will be an option
from each number. The options are opt(n) = {d < n : d ∤ n, n ∈ ℕ0 }.
n opt(n) 𝒮𝒢(n)
0 ⌀ 0
1 0 1
2 0 1
3 0, 2 2
4 0, 3 1
5 0, 2, 3, 4 3
6 0, 4, 5 2
7 0, 2, 3, 4, 5, 6 4
8 0, 3, 5, 6, 7 1
In maliquant the 2-valuation plays an opposite role as in saliquot; here only the odd
part of n determines the nim-value. Let io : ℕ → ℕ be the index function for the largest
odd factor of a given natural number; that is, if n = 2k (2m − 1), then io (n) = m. Clearly,
io (2m − 1) = m, and io (n) ≤ (n + 1)/2.
Lemma 1. For all n ∈ ℕ, the numbers in the set {n, . . . , 2n − 1} contribute all maximal
odd factor indices in the set {1, . . . , n}, that is, {io (x) : n ≤ x ≤ 2n − 1} = {1, . . . , n}.
Proof. We use induction on n. Assuming that the statement of the lemma holds for all
natural numbers up to n, we consider the set {n+1, . . . , 2n−1, 2n, 2n+1}. As io (2n) = io (n)
and io (2n + 1) = n + 1, it follows that {io (x) : n + 1 ≤ x ≤ 2n + 1} = {1, . . . , n, n + 1}.
Theorem 3. Consider maliquant. Then 𝒮𝒢 (0) = 0, and 𝒮𝒢 (n) = io (n) for all n ∈ ℕ.
If we augment this set with the element 2k m, then by Lemma 1 the indices of its ele-
ments are {1, 2, . . . , 2k−1 m + 1}, which includes {1, 2 . . . , io (m) − 1} as a subset. However,
opt(n) contains no elements of the form 2j m, and hence io (m) does not appear among
the elements io (x) for all x ∈ opt(n), whereas all elements of {1, . . . , io (m)−1} do appear.
Thus by induction hypothesis 𝒮𝒢 (n) = io (m).
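Theorem 3 can be spot-checked numerically; in this sketch of ours, the option set is {d : 0 ≤ d < n, d ∤ n}, matching the definition above (0 is always an option).

```python
from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

def i_odd(n):
    """Index of the largest odd factor: if n = 2^k (2m - 1), return m."""
    while n % 2 == 0:
        n //= 2
    return (n + 1) // 2

@lru_cache(maxsize=None)
def sg_maliquant(n):
    # options: 0 together with every d < n that does not divide n
    return mex({sg_maliquant(d) for d in range(n) if d == 0 or n % d != 0})

assert sg_maliquant(0) == 0
assert all(sg_maliquant(n) == i_odd(n) for n in range(1, 400))
print("ok")
```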
Here we run into some mysterious sequences. We can only prove partial results. The
options are opt(n) = {n − d : d ∤ n, n ∈ ℕ0 }.
n opt(n) 𝒮𝒢(n)
0 ⌀ 0
1 ⌀ 0
2 ⌀ 0
3 1 1
4 1 1
5 1, 2, 3 2
6 1, 2 1
7 1, 2, 3, 4, 5 3
8 1, 2, 3, 5 3
9 1, 2, 3, 4, 5, 7 4
10 1, 2, 3, 4, 6, 7 2
11 1, 2, 3, 4, 5, 6, 7, 8, 9 5
12 1, 2, 3, 4, 5, 7 4
13 1, . . . , 11 6
14 1, . . . , 6, 8, 9, 10, 11 6
15 1, . . . , 9, 11, 13 7
16 1, . . . , 7, 9, 10, 11, 13 7
17 1, . . . , 15 8
18 1, . . . , 8, 10, 11, 13, 14 4
19 1, . . . , 17 9
The odd heap sizes turn out to be simple. We give some more nim-values for even heap
sizes n = 0, 2, . . .:
𝒮𝒢 (n) = 0, 0, 1, 1, 3, 2, 4, 6, 7, 4, 7, 5, 10, 12, 10, 13, 15, 8, 13, 9, 17, 17, 16, 11, 22, . . . .
𝒮𝒢 (n)/n = 0, 1/4, 1/6, 3/8, 1/5, 1/3, 3/7, 7/16, 2/9, 7/20, 5/22, 5/12, 6/13, 5/14,
13/30, 15/32, 4/17, 13/36, 9/38, 17/40, 17/42, 4/11, 11/46, 11/24, . . . .
Sorting the ratios 𝒮𝒢 (n)/n by size, we find that the associated pairs [n, 𝒮𝒢 (n)]
for the smallest ratios, [6, 1], [10, 2], [18, 4], [22, 5], [34, 8], [38, 9], [46, 11], satisfy (n −
2)/𝒮𝒢 (n) = 4. Half of each heap size in this sequence is odd, and we get the odd
numbers 3, 5, 9, 11, 17, 19, 23, . . . . We have not investigated these patterns further, but
we believe that 𝒮𝒢 (n) ≥ (n − 2)/4 for all n. Indeed, by plotting the first 1000 nim-values
in Figure 14.1 this lower bound appears to continue.
Theorem 4. Consider saliquant. Then 𝒮𝒢 (0) = 0, and if n is odd, then 𝒮𝒢 (n) = (n − 1)/2.
Moreover, 𝒮𝒢 (n) < n/2.
Proof. Suppose that the statement holds for all m < n. If n = 2x + 1, then each nonneg-
ative integer smaller than x is represented as a nim-value, and specifically, for each
odd number 2y + 1 with y < x, 𝒮𝒢 (2y + 1) = y. Moreover, each odd number smaller than
n is an option of n, since each even integer is a nondivisor of n = 2x + 1. By
induction each even number smaller than n has a smaller nim-value, and we are done
with the first part of the proof.
Suppose next that n = 2x. Then, since both 1 and 2 are divisors, we know that the
largest option is smaller than 2x − 2. By induction the nim-value of any number smaller
than 2x − 2 is at most x − 2; that is, 𝒮𝒢 (2x − 3) = 𝒮𝒢 (2(x − 2) + 1) = x − 2 is the upper
bound for the nim-value of an option of 2x.
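Theorem 4's odd-case formula and the n/2 bound check out numerically on an initial segment (our sketch, using the option set from the table above):

```python
from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def sg_saliquant(n):
    # options: subtract any nondivisor d with 1 <= d < n
    return mex({sg_saliquant(n - d) for d in range(1, n) if n % d != 0})

assert all(sg_saliquant(n) == (n - 1) // 2 for n in range(1, 400, 2))
assert all(sg_saliquant(n) < n / 2 for n in range(1, 400))
print("ok")
```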
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1 1
3 1, 2 2
4 1, 3 1
5 1, 2, 3, 4 3
6 1, 5 1
7 1, . . . , 6 4
8 1, 3, 5, 7 1
We have the following result. The solution involves the function ip , the index of the
smallest prime divisor of a given number, where the prime 2 has index 1.
Theorem 5. Consider totative. The nim-value of n > 1 is the index of the smallest prime
divisor of n, and 𝒮𝒢 (1) = 0.
Proof. There is no move from 1, because the only number relatively prime with 1 is 1,
and options are smaller than the given number. Hence, by the definition of the mex-
function, 𝒮𝒢 (1) = 0. Also, 𝒮𝒢 (2) = 1, since the only number relatively prime to 2 is 1,
which has nim-value 0, and ip (2) = 1. Suppose that the result holds for all numbers
smaller than n. From n a player can only access a smaller number with no common
divisor with n. Therefore none of its options has the same smallest prime divisor, and
hence, by induction, no option has nim-value ip (n).
Thus the index of the smallest prime divisor of n will be chosen as nim-value if
each prime with a smaller index appears as an option. However, the set of relatively
prime numbers smaller than n contains in particular all the relatively prime numbers
of the smallest prime divisor of n and hence all the primes that are smaller than the
smallest prime factor of n. By induction this is the desired set of nim-values, since the
move-to 1 (of nim-value 0) is always available.
This is the sequence A055396 in Sloane [8]: “Smallest prime dividing n is a(n)th
prime (a(1) = 0).”
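Theorem 5 can be spot-checked against a naive prime-index function; this sketch is ours, and A055396 gives the same values.

```python
from functools import lru_cache
from math import gcd

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

def smallest_prime_index(n):
    """Index of the smallest prime divisor of n (2 has index 1); 0 for n = 1."""
    primes = []
    candidate = 2
    while candidate <= n:
        if all(candidate % p for p in primes):  # trial division primality
            primes.append(candidate)
            if n % candidate == 0:
                return len(primes)
        candidate += 1
    return 0

@lru_cache(maxsize=None)
def sg_totative(n):
    # options: every smaller number relatively prime to n
    return mex({sg_totative(k) for k in range(1, n) if gcd(k, n) == 1})

assert sg_totative(1) == 0
assert all(sg_totative(n) == smallest_prime_index(n) for n in range(2, 300))
print("ok")
```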
n opt(n) 𝒮𝒢(n)
0 ⌀ 0
1 0 1
2 0 1
3 0 1
4 0, 2 2
5 0 1
6 0, 2, 3, 4 3
7 0 1
8 0, 2, 4, 6 4
9 0, 3, 6 2
10 0, 2, 4, 5, 6, 8 5
11 0 1
12 0, 2, 3, 4, 6, 8, 9, 10 6
13 0 1
14 0, 2, 4, 6, 7, 8, 10, 12 7
15 0, 3, 5, 6, 9, 10, 12 4
16 0, 2, 4, 6, 8, 10, 12, 14 8
17 0 1
18 0, 2, 3, 4, 6, 8, 9, 10, 12, 14, 15, 16 9
19 0 1
This sequence does not yet appear in OEIS [8], but curiously enough, a nearby se-
quence is A078898, “Number of times the smallest prime factor of n is the smallest
prime factor for numbers ≤ n; a(0) = 0, a(1) = 1.” For n ≥ 2, a(n) tells in which column
Let us sketch a few nim-value subsequences. The primes have nim-value 1, and the
prime squares have nim-value 2. The numbers with close prime factors, “almost
squares”, appear to have almost constant nim-values. On the other hand, some num-
bers in arithmetic progressions appear to have nim-values in almost arithmetic pro-
gressions; for example, 𝒮𝒢 (2n) = n for all n. We give the exact statements in Theorem 6.
For subsequences s of the natural numbers, let the asymptotic relative nim-value
be
rs = lim_{n→∞} 𝒮𝒢 (s(n))/s(n).
Moreover, for n ∈ ℕ,
(v) 𝒮𝒢 (2n) = n;
(vi) if n ≡ 3 (mod 6), then 𝒮𝒢 (n) = ⌊(n + 1)/4⌋;
(vii) if n ≡ 5, 25 (mod 30), then 𝒮𝒢 (n) ∈ {⌊n/10⌋, ⌈n/10⌉}.
Lastly, for n ∈ ℕ,
(viii) 𝒮𝒢 (n) ≤ n/2;
(ix) if n is odd, then 𝒮𝒢 (n) ≤ (n + 1)/4.
Proof. Note that 𝒮𝒢 (0) = 0 implies 𝒮𝒢 (n) > 0 if n > 0. The induction hypothesis
assumes all the items. Item (v) takes care of the case of even n, so in all other items,
we may assume that the smallest prime dividing n is greater than 2. Note that (ix) and
(v) imply (viii).
For (i), the only nonrelatively prime number of a prime is 0. Hence 𝒮𝒢 (p) = 1
if p is prime. If n = pm is not a prime, then there is a move to the prime divisor p, a
nonrelative prime, and there is a move to 0. Hence the nim-value is greater than one.
For (ii), we consider prime squares p2 and note that each option is of the form np,
0 ≤ n ≤ p − 1. In particular, there are moves p2 → 0 and p2 → p, as noted in the first
paragraph. Moreover, by induction we assume that 𝒮𝒢 (m) = 2 if and only if 1 < m < p2
is a prime square. Then m ≠ np, and so by the minimal exclusive algorithm, 𝒮𝒢 (p2 ) = 2.
For the other direction, we are done with the cases 0, 1, and primes. Consider the com-
posite n = pm, not a prime square, where p is the smallest prime factor. Then there is
a move to p2 , and hence 𝒮𝒢 (n) ≠ 2.
For (iii), we begin by proving that 𝒮𝒢 (n) ∈ {3, 4} if n = pi pi+1 or n = 8, where the
nim-value is three if and only if the index of the smaller prime is odd. The base case
is 𝒮𝒢 (2 ⋅ 3) = 3, where the exception n = 8 is by inspection. For the generic case, each
option is of the form npi , 0 ≤ n ≤ pi+1 − 1, or npi+1 , 0 ≤ n ≤ pi − 1. In particular, there is a
move to pi (and to pi+1 ) of nim-value one, and there is a move to p2i of nim-value 2. We
must show that there is no move to another prime pair of the same form, i. e., pj pj+1
with j of the same parity as i. Observe that there is a move to pi−1 pi with j + 1 = i,
but there is no move to any other almost square pj pj+1 . By induction this observation
suffices to find a move to nim-value 3 if i is even. For the other direction, we must show
that 𝒮𝒢 (n) ∈ ̸ {3, 4} if n is not an almost square. We are done with the cases where n is
a prime or a prime square. Suppose that n = px is not of the mentioned form, where
p is the smallest prime factor of n. The case p = 2 is dealt with in item (v), so let us
assume that p > 2. Then there is a move to pq (nonrelative prime with n), where q is
the smallest prime larger than p, because by assumption, x > q, and there is a move
to pq, where q is the largest prime smaller than p.
For (iv), we study the case n = pi pi+2 . If i = 1, then n = 10, and 𝒮𝒢 (10) = 5.
If i = 2, then n = 21, and, by inspection, 𝒮𝒢 (21) = 5. For the general case, among the
options, we find 0, pi , p2i , pi pi+1 , pi+1 pi+2 . Hence by the previous paragraphs the options
attain all nim-values smaller than 5. Next, suppose that i ≡ 1, 2 (mod 4), and we must
show that there is no option of the same form to create nim-value 5. Each option is
a multiple of one of the primes pi and pi+2 . The only possibility would be the option
pi−2 pi . However, i − 2 ≡ 0, 3 (mod 4). Hence no option has nim-value 5. The analogous
argument suffices to show that no option has nim-value 6 if i ≡ 0, 3 (mod 4), i ≥ 3.
The particular case n = 12 = 2 × 2 × 3 is not an option since i ≥ 3. On the other hand, the
argument shows that there is an option to nim-value 5. Consider the other direction.
Suppose that n = pi x, where pi > 2 is the smallest prime in the factorization of n. (The
case p = 2 is dealt with below.) If x > pi+2 , then there is a move to pi pi+2 , and if i ≥ 3,
then there is a move to pi−2 pi . If i = 2, then there is a move to 12 of nim-value 6. That
concludes this case. If pi < x < pi+2 , then x = pi+1 , since pi is the smallest prime in the
decomposition of n, and we are done with this case.
For (v), we verify that 𝒮𝒢 (2n) = n for all n. The even options are of the form 2j with
0 ≤ j < n, and so induction on (v) gives that each nim-value smaller than n can be
reached. Moreover, induction on (viii) gives that nim-value n does not appear among
the options.
For (vi), we must prove that if n ≡ 3 (mod 6), then 𝒮𝒢 (n) = ⌊(n+1)/4⌋. The claimed
nim-value sequence for the positions 3, 9, 15, 21, 27, . . . is ⌊(3 + 1)/4⌋, ⌊(9 + 1)/4⌋, ⌊(15 +
1)/4⌋, . . . , which is 1, 2, 4, 5, 7, 8, . . . . Clearly, n has each smaller position of the same
form as an option. Precisely, the multiples of 3 are missing in the nim-value sequence.
However, by induction on (v) the multiples of 6 have nim-values multiples of 3, and
indeed, all multiples of 6 smaller than n are options of n. By induction on item (ix),
since n is odd, the nim-value ⌊(n + 1)/4⌋ does not appear among its options. The proof
of (vii) is similar to (vi) but more technical, so we omit it.
Item (viii) follows directly by induction (for example, if n is even, then n − 2 is the
largest option, and 𝒮𝒢 (n − 2) ≤ n/2 − 1).
For item (ix), assume that p > 2 is the smallest prime divisor of n. The cases with
p ≤ 5 have already been proved in items (vi) and (vii). Hence p > 5, and with
t = ⌊n/(2p)⌋ we have tp + 3 < n/2 < (t + 1)p − 3. It follows that the nim-value ⌊(n + 1)/4⌋ cannot be
reached from n by options of the form in (v). On the other hand, it cannot be reached
by moving to an odd number, since by induction options n − 2p or smaller produce too
small nim-values.
3 Counting games
This section concerns rulesets as in item (i) in the introduction. Binary games have
only one option per heap, so the decision problem reduces to which heap to move in.
The nim-value of a binary game is binary, that is, each nim-value ∈ {0, 1}; the nim-value
of a given disjunctive sum of binary games is 0 if and only if the number of heaps of
nim-value 1 is even. Of course, the nim-value sequence for any given ruleset is valid in
the much larger context of all normal-play combinatorial games.
Game values of arithmetic functions | 261
For n > 2, let i be the least positive integer such that
ϕ^i(n) = 2.  (14.1)
This lets us define the class of n as C(n) = i when (14.1) holds, and otherwise C(1) =
C(2) = 0.
Theorem 7 ([7]). Let m, n ∈ ℕ. If n is odd, then C(n) = C(2n), and otherwise C(n) + 1 =
C(2n). In general, if either m or n is odd, then C(mn) = C(m) + C(n). Otherwise, that is, if
both m and n are even, then C(mn) = C(m) + C(n) + 1.
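Theorem 7 and the class function are easy to check numerically. The following sketch is our own illustration (not code from [7]); it computes C by iterating ϕ and spot-checks the multiplication rule:

```python
def euler_phi(n):
    # Euler's totient via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def C(n):
    # Shapiro's class: the number of iterations of phi needed to reach 2,
    # with C(1) = C(2) = 0
    i = 0
    while n > 2:
        n = euler_phi(n)
        i += 1
    return i

# spot-check Theorem 7: C(mn) = C(m) + C(n), plus 1 when m and n are both even
for m in range(1, 41):
    for n in range(1, 41):
        extra = 1 if m % 2 == 0 and n % 2 == 0 else 0
        assert C(m * n) == C(m) + C(n) + extra
```

For instance, C(2^3) = 2 and C(48114) = 10, in line with the examples following Theorem 8.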
This is a binary game. Indeed, there is only one option from n, namely ϕ(n), the number of residues relatively prime to n.
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1 1
3 2 0
4 2 0
5 4 1
6 2 0
7 6 1
8 4 1
262 | D. E. Iannucci and U. Larsson
where the commas are for readability. Each game component has a forced move, which
if played alone may be regarded as an automaton. Starting from 8, for example, the
iteration of ϕ gives the sequence of moves 8 → 4 → 2 → 1, and the nim-sequence, of
course, is alternating between 0s and 1s, terminating with the 0 at position 1. If played
on a disjunctive sum of totient heaps, the nim-value sequence is of course also binary,
and the sum is 0 if and only if there is an even number of heaps of nim-value 1. Suppose
that we play 7t + 7s , where the first 7 is totient, and the second 7 is subtraction{1, 2},
with nim-value sequence 0, 1, 2, 0, 1, 2, 0, . . . , say with sink number 1 on both components.
Then there are exactly two winning moves, to 6t + 7s or
7t + 5s . Intelligent play from a general position mt + ns requires full understanding of
totient.
Theorem 8. Consider totient, and let C be as in Theorem 7. Then 𝒮𝒢 (1) = 0, and for
n > 1, 𝒮𝒢 (n) = C(n) + 1 (mod 2).
In general, it suffices to compute the parity of C(n) and, given the factorization
of n, apply Theorem 7. For example, C(2^3) = C(2) + C(2^2) + 1 = 3C(2) + 2 = 2, and
without looking into the table, we get 𝒮𝒢(8) = 1. For another example, if n = 2 ⋅ 3^7 ⋅ 11,
then C(n) = 7C(3) + C(11) = 7 + 3 = 10, since ϕ(3) = 2 and ϕ^3(11) = 2. Therefore
𝒮𝒢(48114) = (C(48114) + 1) (mod 2) = 11 (mod 2) = 1, and we find a unique winning
move 48114t + 3s → 48114t + 2s , where s is still subtraction{1, 2}.
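Theorem 8 and the example above can be double-checked by brute force. In the sketch below (helper names are ours) the nim-value of totient is the parity of the ϕ-iteration chain down to 1:

```python
def euler_phi(n):
    # Euler's totient via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def sg_totient(n):
    # totient is binary: the single option from n > 1 is phi(n), so SG(n)
    # is the parity of the number of phi-iterations needed to reach 1
    steps = 0
    while n > 1:
        n = euler_phi(n)
        steps += 1
    return steps % 2

def shapiro_class(n):
    i = 0
    while n > 2:
        n = euler_phi(n)
        i += 1
    return i

assert sg_totient(1) == 0
for n in range(2, 500):
    assert sg_totient(n) == (shapiro_class(n) + 1) % 2
assert sg_totient(48114) == 1
```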
From a given number n, subtract the number of relatively prime numbers smaller
than n. We cannot adapt Theorem 8, because it relies on iterating ϕ, that is, on moving
to ϕ(n) rather than subtracting it, and the authors have not yet found a similarly efficient
tool. Let us list the initial options and nim-values. An alternative way to think of the
options is to move-to the number of nonrelatively prime numbers, including the number
itself. The nim-values alternate for heaps that are powers of primes, starting with 𝒮𝒢(p^0) = 0.
This happens because the number of nonrelatively prime numbers smaller than or
equal to a prime power p^k is p^{k−1}. Hence 𝒮𝒢(p^k) = 0 if and only if k is even. Since ϕ
is multiplicative, it is easy to compute f (n) = n − ϕ(n) for any n or get a formula for f
for any given prime decomposition of n. However, f is not multiplicative, which limits
the applicability of such formulas. In some particular cases, we can use the proximity
to powers of primes for fast computation of the nim-value. Take the case of n = p^k q
for some distinct primes p and q. Then f(n) = n − ϕ(n) = p^{k−1}(q + p − 1). Whenever
q + p − 1 is a power of the prime p, the nim-value of n is immediate by the parity of
the new exponent. Take, for example, p = 2 and q = 7. Then p + q − 1 = 2^3, and so we
can find the nim-value of, for example, n = 7168 = 7 ⋅ 2^10, which gives the exponent
9 + 3 = 12, and so 𝒮𝒢(7168) = 0. Similarly, with p = 3 and q = 7, we can easily compute
𝒮𝒢(413343) = 1, because p + q − 1 = 3^2 and 413343 = 7 ⋅ 3^10.
Let “dist” denote the number of iterations of f to an even power of a prime. We get
the following suggestive table of the first few nim-values. We leave a further classifi-
cation of dist as an open problem.
n opt(n) dist(n) 𝒮𝒢(n)
1 ⌀ 0 0
2 1 1 1
3 1 1 1
4 2 0 0
5 1 1 1
6 4 1 1
7 1 1 1
8 4 1 1
9 3 0 0
10 6 2 0
11 1 1 1
12 8 2 0
13 1 1 1
14 8 2 0
15 7 2 0
16 8 0 0
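Reading the columns of the table as n, opt(n), dist(n), 𝒮𝒢(n), the rows can be regenerated with the following sketch (the function names are ours):

```python
def euler_phi(n):
    # Euler's totient via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def f(n):
    # the single option: subtract the number of residues relatively prime to n
    return n - euler_phi(n)

def even_prime_power(n):
    # True if n = p^k for a prime p with k even (n = 1 = p^0 counts)
    if n == 1:
        return True
    p = 2
    while p * p <= n:
        if n % p == 0:
            k = 0
            while n % p == 0:
                n //= p
                k += 1
            return n == 1 and k % 2 == 0
        p += 1
    return False  # n itself is prime; exponent 1 is odd

def dist(n):
    d = 0
    while not even_prime_power(n):
        n, d = f(n), d + 1
    return d

def sg(n):
    # binary game: parity of the chain length from n down to the terminal 1
    s = 0
    while n > 1:
        n, s = f(n), s ^ 1
    return s

rows = [(n, f(n), dist(n), sg(n)) for n in range(2, 17)]
assert (10, 6, 2, 0) in rows and (16, 8, 0, 0) in rows
```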
Consider mtau, where the single option is the number of proper divisors.4 A heap of
size one has no option, so the nim-value sequence starts at 𝒮𝒢 (1) = 0. Let us list the
first few nim-values.
4 Note that if we remove the word proper here, then both 1 and 2 become loopy, and thus all games
would be drawn. See also Section 8 for some more reflections on “loopy” or “cyclic” games.
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1 1
3 1 1
4 2 0
5 1 1
6 3 0
7 1 1
8 3 0
9 2 0
Note that each prime has nim-value 1, because a prime has only one proper divisor. From
this small table we may deduce many more nim-values. The first few 0-positions are
1, 4, 6, 8, 9, 10, 12, 14, 15, 18, 20, 21, 22, 24, 25, 26, 27, 28, 30, . . . .
Note that 16 is the first composite number that is not included, 36 is the second one,
and then 48, 80, 81, 100, etc. What is special about these composite numbers?
The sequence of all ones has some resemblance to the sequence of all numbers
with a nonprime number of proper divisors. As mentioned, 16 and 36 are the first
composite members of this sequence. These two numbers are the smallest composite
numbers with a composite (nonprime) number of proper divisors; such numbers
generalize the primes, because primes also have a nonprime number of proper divisors.
We are interested in the smallest number n = p_1^{a_1} ⋅ ⋅ ⋅ p_k^{a_k} for which τ(n) − 1 =
(a_1 + 1) ⋅ ⋅ ⋅ (a_k + 1) − 1 ∈ {16, 36, 48, 80, . . .}, that is, the smallest number n such that
(a_1 + 1) ⋅ ⋅ ⋅ (a_k + 1) ∈ {17, 37, 49, 81, . . .}. An obvious candidate is n = 2^16 with a_1 = 16
and otherwise a_i = 0. However, it turns out that n = 2^6 ⋅ 3^6 = 46656 < 2^16 gives
τ(46656) = (6 + 1)(6 + 1) = 49, and this is indeed the smallest such number. Thus we
have the following observation.
have the following observation.
Observation 1. Consider mtau. If n < 46656, then 𝒮𝒢(n) = 1 if and only if n has a
nonprime number of proper divisors.
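The mtau nim-values, and the prime/nonprime correspondence of Observation 1 on an initial range, can be regenerated with the following sketch (the sieve bound is our own choice):

```python
N = 10000
# tau[n] = number of divisors of n, via a divisor sieve
tau = [0] * (N + 1)
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        tau[m] += 1

def is_prime(k):
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

# mtau: the single option from a heap n >= 2 is tau(n) - 1, the number of
# proper divisors; a heap of size 1 is terminal
sg = [0] * (N + 1)
for n in range(2, N + 1):
    sg[n] = 1 - sg[tau[n] - 1]

assert [sg[n] for n in range(1, 10)] == [0, 1, 1, 0, 1, 0, 1, 0, 0]
for n in range(2, N + 1):
    assert (sg[n] == 1) == (not is_prime(tau[n] - 1))
```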
We note that neither sequence is listed in OEIS (see Section 3.3.1 for a listed similar
sequence).
Consider stau, where the single option is the number minus its number of divisors.
This variation has a mysterious nim-sequence, beginning with 𝒮𝒢(0) = 0.
A heap of size one has one divisor, with an option to zero. A heap of size two has
two divisors, and hence the option is zero, and so on: 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0,
1, 1, 0, 0, 1, 1, 1, . . . .
Consider mΩ, where the single option is the number of prime divisors of n counted
with multiplicity.5 The nim-sequence starts
0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, . . . ,
and the 1s occur at
2, 3, 5, 7, 11, 13, 16, 17, 19, 23, 24, 29, 31, 36, 37, 40, 41, . . . .
The number 64 is in the sequence, and this distinguishes it from A026478. Still, it is
not exactly A167175, since not all numbers with a nonprime number of prime divisors
are included. The sequences coincide up to 2^16 − 1 though, since the first such number
to be excluded is 2^16. By a similar (but easier) reasoning as in Section 3.2.1, we have
the following observation.
the following observation.
Observation 2. Consider mΩ. If n < 2^16, then 𝒮𝒢(n) = 1 if and only if n has a
nonprime number of prime divisors counted with multiplicity.
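Observation 2 can be verified computationally. In the sketch below (our reading of the ruleset: the single option of mΩ is Ω(n), with heap 1 terminal) all n up to 2^16 are checked:

```python
N = 1 << 16
spf = list(range(N + 1))  # smallest-prime-factor sieve
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p

def omega_mult(n):
    # Omega(n): prime divisors counted with multiplicity
    c = 0
    while n > 1:
        n //= spf[n]
        c += 1
    return c

def sg(n):
    # binary game: parity of the chain length of n -> Omega(n) down to 1
    s = 0
    while n > 1:
        n = omega_mult(n)
        s ^= 1
    return s

def is_prime(k):
    return k >= 2 and spf[k] == k

# the correspondence holds strictly below 2^16 and fails first at 2^16 itself
for n in range(2, N):
    assert (sg(n) == 1) == (not is_prime(omega_mult(n)))
assert sg(N) == 0 and not is_prime(omega_mult(N))
```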
Here we consider the ruleset sΩ, “subtract the number of prime divisors”. A heap of size
one has nim-value zero by definition. A heap of size two has a move to a heap of size
one and has nim-value one. A heap of size three has a move to a heap of size two and
hence nim-value zero.
5 That is, if n has canonical form n = p_1^{a_1} p_2^{a_2} ⋅ ⋅ ⋅ p_k^{a_k}, then Ω(n) = a_1 + a_2 + ⋅ ⋅ ⋅ + a_k and ω(n) = k.
Observation 3. If n < 7!, then 𝒮𝒢 (n) = 1 if and only if n contains exactly one distinct
factor.
The nim-value sequence of sω, “subtract the number of distinct prime divisors”, starts
at a heap of size one, which has no prime divisors and hence nim-value zero. Next, two
has the option one, three has the option two, and four has the option three. The first
few nim-values are:
0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, . . . ,
with ones at indices 2, 4, 7, 9, 10, 13, 14, 16, 19, 20, 23, . . . . Neither of these sequences ap-
pears in OEIS.
4 Dividing games
Dividing rulesets deploy the notion of a disjunctive sum in their recursive definition,
that is, an option is typically, with some exceptions, a disjunctive sum of games.
A reference that goes into detail on such games is [3].
This ruleset splits the current number into equal parts, and, as usual, we write “+” to separate parts into
new game components. To avoid long chains of components, we use multiplicative
notation in the sense that x × y means y copies of x (i. e., x + ⋅ ⋅ ⋅ + x). In this notation,
addition is commutative, but multiplication is not. For example, opt(10) = {5 × 2, 2 ×
5, 1 × 10}. The current player moves in precisely one of the components and leaves the
other ones unchanged. For example, a move from 5 + 5 is to 5 + 1 × 5 = 5 (because no
move is possible from 1 × 5), and by symmetry this is the only admissible move. The
number of options is τ(n) − 1, and here is the table of the first few options together with
their nim-values:
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1+1 1
3 1×3 1
4 2 × 2, 1 × 4 1
5 1×5 1
6 3 × 2, 2 × 3, 1 × 6 2
7 1×7 1
8 4 × 2, 2 × 4, 1 × 8 1
The options simplify by the mimicking strategy and by a heap of size one being terminal:
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1 1
3 1 1
4 1 1
5 1 1
6 1, 2 2
7 1 1
8 1 1
Let Ω2 (n) denote the number of prime factors of n, where the powers of 2 are counted
without multiplicity, and the powers of odd primes are counted with multiplicity.
Proof. 𝒮𝒢(1) = 0, and 1 has no prime factors. Suppose that n is a power of
two. Then 𝒮𝒢(n) = 1, since each option allows the mimic strategy. Similarly, if n is a
prime, then 𝒮𝒢(n) = 1. Suppose that n = 2^k p_1 ⋅ ⋅ ⋅ p_j with k ∈ ℕ0 and each p_i an odd prime. We
use induction to prove that 𝒮𝒢(n) = j if k = 0 and 𝒮𝒢(n) = j + 1 otherwise. If k = 0, then
n can be split into an odd number of components each having m prime factors, for each
m ∈ [1, j − 1]. Induction and the nim-sum together with the mex-rule give the result in
this case. Similarly, if k > 0, then n can be split into an odd number of components of
m prime factors for each m ∈ [1, j], which proves that 𝒮𝒢(n) = j + 1 in this case.
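The claim proved above — 𝒮𝒢(n) = Ω2(n) for this equal-parts splitting game — can be confirmed by brute force. A sketch, under our reading of the ruleset (replace n by y equal heaps of size x whenever n = xy with y ≥ 2):

```python
import functools

def mex(s):
    m = 0
    while m in s:
        m += 1
    return m

@functools.lru_cache(maxsize=None)
def sg(n):
    # split n into y equal heaps of size x (n = x * y, y >= 2); that option's
    # value is the XOR of y copies of sg(x): sg(x) if y is odd, else 0
    opts = set()
    for y in range(2, n + 1):
        if n % y == 0:
            opts.add(sg(n // y) if y % 2 else 0)
    return mex(opts)

def omega2(n):
    # Omega_2: odd prime factors with multiplicity; the power of 2 counts once
    c = 1 if n % 2 == 0 else 0
    while n % 2 == 0:
        n //= 2
    p = 3
    while p * p <= n:
        while n % p == 0:
            n //= p
            c += 1
        p += 2
    if n > 1:
        c += 1
    return c

assert sg(1) == 0
for n in range(2, 400):
    assert sg(n) == omega2(n)
```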
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1+1 1
3 2 + 1, 1 × 3 2
4 3 + 1, 2 × 2, 1 × 4 1
5 4 + 1, 3 + 2, 2 × 2 + 1, 1 × 5 2
6 5 + 1, 4 + 2, 3 × 2, 2 × 3, 1 × 6 3
7 6 + 1, 5 + 2, 4 + 3, 3 × 2 + 1, 2 × 3 + 1, 1 × 7 2
8 7 + 1, 6 + 2, 5 + 3, 4 × 2, 3 × 2 + 2, 2 × 4, 1 × 8 3
An even number of heaps of the same size reduces to a heap of size one (of nim-value 0). A heap of
size one in a disjunctive sum gets removed. We get an equivalent reduced table:
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1 1
3 2, 1 2
4 3, 1 1
5 4, 3 + 2, 1 2
6 5, 4 + 2, 2, 1 3
7 6, 5 + 2, 4 + 3, 2, 1 2
8 7, 6 + 2, 5 + 3, 2, 1 3
9 8, 7 + 2, 6 + 3, 5 + 4, 3, 1 4
10 9, 8 + 2, 7 + 3, 6 + 4, 3, 2, 1 3
Note that when we remove pairs of equal numbers, sometimes we must add the option
“1” to symbolize a move to a terminal position of nim-value 2 ⊕ 2 = 0. From this table
we may deduce that the game 7 + 3 from the first paragraph in the paper is indeed a
losing position. divide-and-residue has a mysterious nim-sequence, as depicted in
Figure 14.3.
Figure 14.3: The initial 20000 nim-values of divide-and-residue. They just about touch the nim-value 2^8 = 256.
Here are the first 50 nim-values, of the form [heap size, nim-value]:
[1, 0], [2, 1], [3, 2], [4, 1], [5, 2], [6, 3], [7, 2], [8, 3], [9, 4], [10, 3], [11, 4], [12, 3], [13, 4], [14, 3],
[15, 4], [16, 3], [17, 4], [18, 5], [19, 4], [20, 5], [21, 3], [22, 5], [23, 4], [24, 2], [25, 1], [26, 5],
[27, 6], [28, 5], [29, 6], [30, 2], [31, 6], [32, 5], [33, 3], [34, 8], [35, 9], [36, 8], [37, 9], [38, 8],
[39, 9], [40, 8], [41, 9], [42, 4], [43, 9], [44, 4], [45, 9], [46, 8], [47, 9], [48, 4], [49, 9], [50, 4].
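Under the reading of divide-and-residue suggested by the tables — choose d < n and replace the heap by ⌊n/d⌋ copies of d plus a heap of n mod d — the listed values can be regenerated. A sketch, with a modest bound of our own choosing:

```python
def mex(s):
    m = 0
    while m in s:
        m += 1
    return m

N = 500
sg = [0] * (N + 1)
for n in range(2, N + 1):
    opts = set()
    # choose d < n: replace n by floor(n/d) copies of d plus a heap of n mod d;
    # the XOR of q copies of sg(d) is sg(d) if q is odd, else 0
    for d in range(1, n):
        q, r = divmod(n, d)
        opts.add((sg[d] if q % 2 else 0) ^ sg[r])
    sg[n] = mex(opts)

assert [sg[n] for n in range(1, 10)] == [0, 1, 2, 1, 2, 3, 2, 3, 4]
assert sg[25] == 1 and sg[50] == 4
# the heap of size 25 is the largest heap of nim-value one (checked up to N)
assert max(n for n in range(1, N + 1) if sg[n] == 1) == 25
```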
Early nim-values tend to be odd for heaps of even size, and even for those of odd
size. By an elementary argument we get: the heap of size 25 is the largest heap of nim-
value one, and we can prove the analogous statement for a few more small nim-values.
We conjecture that any fixed nim-value occurs finitely many times.
For the upper bound, the nim-values seem to be bounded by n^{3/5} for sufficiently
large heap sizes n. An empirical observation is that the growth of nim-values appears
to be halted at powers of two. For example, the nim-value 2^2 starts to appear at heap
size 9, but the values do not increase beyond 2^2 + 2^0 until the heap of size 27.
It is no big surprise that this game is hard, since it is an extension of grundy’s
game [1, 8]. Indeed, the options of divide-and-residue in which the divisor d is
greater than n/2 correspond to the rule of splitting a heap into the two unequal parts
of grundy’s game. If we define the ruleset complement-grundy by requiring that
k ≥ 2 in divide-and-residue, then we can prove the second statement in Conjec-
ture 1 for this new game. Let us tabulate the first few nim-values, where options are
displayed in reduced form:
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1 1
3 1 1
4 1 1
5 1 1
6 2, 1 2
7 2, 1 2
8 2, 1 2
9 3, 2, 1 2
10 3, 2, 1 2
11 3, 3 + 2, 2, 1 2
Figure 14.4 shows that the initial regularity of nim-values is replaced by more com-
plexity further down the road, although not as severely as for divide-and-residue.
Note that the two games appear to share some geometric properties such as a local
stop of nim-value growth at powers of two and a bounded number of occurrences for
each nim-value. In this case though, some nim-values do not appear, such as 12, 15, 20,
etc. We do not yet know if the omitted nim-values can be described by some succinct
formula, and we do not even know if the occurrence of each nim-value is finite.
Proof. Consider the nim-value 2^k. If it does not appear, then we are done. Suppose it
appears for the first time at heap size n_k. By the mex rule, if a heap has nim-value greater than
2^k, then it must have nim-value 2^k in its set of options. By the rules of the game this can
only happen for a heap of size m ≥ 3 n_k. In particular, this holds for the nim-value 2^{k+1},
which occurs for the first time at m = n_{k+1}, say. Thus, for nim-values that are powers
of two, we get (2/3) 𝒮𝒢(n_k)/n_k ≥ 𝒮𝒢(n_{k+1})/n_{k+1}. This upper bound holds for arbitrary nim-values,
since the lower bound on where the power of two 2^{k+1} can appear is the same
lower bound where any other nim-value greater than 2^k may appear.
The patterns of the nim-values of these rulesets are displayed in Figure 14.5. For varia-
tion 1 (to the left), we prove that for heaps greater than 1, the nim-sequence coincides
with OEIS A003602: if n = 2^m(2k − 1) for some m ≥ 0, then a(n) = k.
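The closed form for A003602 is straightforward to evaluate (a quick sketch):

```python
def a(n):
    # OEIS A003602: write n = 2^m * (2k - 1); then a(n) = k
    while n % 2 == 0:
        n //= 2
    return (n + 1) // 2

assert [a(n) for n in range(1, 17)] == [1, 1, 2, 1, 3, 2, 4, 1, 5, 3, 6, 2, 7, 4, 8, 1]
```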
Namely, divide-throw-residue has the same solution as maliquant, the game where
the options are the nondivisor singletons; recall Theorem 3, where this result is expressed as an index function i_o, the index of the largest odd divisor.
Proof. Observe that the options in the interval [⌊n/2⌋ + 1, n − 1] are the same as for
maliquant. Assume first that n is even. Then n/2 + n/2 is an option in divide-throw-
residue, but n/2 is not an option in maliquant. However, n/2 + n/2 only contributes
the nim-value 0 and may be ignored. Consider next the disjunctive sum m+⋅ ⋅ ⋅+m with
an odd number of components adding up to n. Then there is a power of 2, say 2^k, such
that 2^k m ∈ [⌊n/2⌋ + 1, n − 1], i. e., 2^k m ∤ n. So, by induction, 𝒮𝒢(m) = 𝒮𝒢_M(2^k m), where
the M indicates maliquant. On the other hand, there are options of maliquant of the
form m ∤ n, m < n/2. They do not have a match in divide-throw-residue. However, as
we saw in the proof of Theorem 3, they do not contribute to the nim-value computation
in maliquant. The case of odd n is similar.
Example 12 (s-factoring). Let n = 12. Then the set of options is {6+10, 9+8, 10+10+9}.
The nim-sequence starts:
0, 0, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1,
where the first heap is the empty heap, and the second 0 is due to the fact that 1 has no
prime factors. 𝒮𝒢(12) = 2.
Theorem 13. Consider m-factoring, and let n ⩾ 2, where each option is a nontrivial
disjunctive sum of a factoring of n. Then 𝒮𝒢 (n) = Ω(n) − 1. If no two distinct components
may contain the same prime number, then 𝒮𝒢 (n) = ω(n) − 1.
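The first claim of Theorem 13 can be checked by brute force. In this sketch (our reading), an option of a heap n is the disjunctive sum of the factors in a factorization of n into at least two factors, each at least 2:

```python
import functools

def factorizations(n, min_f=2):
    # all nondecreasing tuples of integers >= min_f with product n, incl. (n,)
    result = [(n,)] if n >= min_f else []
    f = min_f
    while f * f <= n:
        if n % f == 0:
            result += [(f,) + t for t in factorizations(n // f, f)]
        f += 1
    return result

def mex(s):
    m = 0
    while m in s:
        m += 1
    return m

@functools.lru_cache(maxsize=None)
def sg(n):
    # m-factoring: an option is the disjunctive sum of the factors in a
    # factorization of n into at least two factors
    opts = set()
    for parts in factorizations(n):
        if len(parts) >= 2:
            v = 0
            for q in parts:
                v ^= sg(q)
            opts.add(v)
    return mex(opts)

def omega_mult(n):
    # Omega(n): prime divisors counted with multiplicity
    c, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            c += 1
        p += 1
    return c + (1 if n > 1 else 0)

for n in range(2, 200):
    assert sg(n) == omega_mult(n) - 1
```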
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1 1
3 1 1
4 1+2 0
5 1 1
6 1+2+3 1
7 1 1
8 1+2+4 0
9 1+3 0
6 Obviously, we need to exclude the divisor n | n; the word “proper” is implicit in the naming.
Theorem 14. Consider fullset maliquot. Then 𝒮𝒢 (n) ∈ {0, 1}, and 𝒮𝒢 (n) = 1 if and
only if n > 1 is square-free.
Let us indicate the idea of the proof. The nim-value 𝒮𝒢(4) = 0 because the only
nonunit proper divisor, 2, is square-free, and 𝒮𝒢(8) = 0 because there is exactly one
nonunit square-free proper divisor, namely 2. In the proof, we will use the idea that 𝒮𝒢(n) = 0
if and only if n has an even number of square-free proper divisors.
∑_{1≤i<j} \binom{j}{i},  (14.2)
is even, where j is the number of prime factors in n (this holds both for even and odd n).
By induction each such individual component divisor has nim-value 1. Note that by
moving in one such divisor the number of components in the disjunctive sum changes
parity; if moved in a prime, then the prime is deleted, and if moved in pq, then this
component splits to p + q, and so on.
By combining these observations we prove the general case of an arbitrary prime
factorization. Assume that n contains a square. We must show that 𝒮𝒢 (n) = 0. By
induction we are concerned only with the square-free divisor components, and we
show that the number of such divisors is odd.
Indeed, if we assume that j in (14.2) is the number of distinct prime factors, then
there is one missing term, \binom{j}{j}. Namely, the divisor composed of all square-free factors
must be counted whenever n contains a square. Apart from this, no new square-free
divisor is introduced. Thus the number of such components is odd, and since by in-
duction they have nim-value 1, we have 𝒮𝒢 (n) = 0.
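Theorem 14 and the parity idea of the proof are easy to confirm numerically (a sketch; the bound is ours):

```python
def mex(s):
    m = 0
    while m in s:
        m += 1
    return m

def squarefree(n):
    p = 2
    while p * p <= n:
        if n % (p * p) == 0:
            return False
        p += 1
    return True

N = 2000
sg = [0] * (N + 1)
for n in range(2, N + 1):
    # fullset maliquot: the single option from n is the disjunctive sum of
    # ALL proper divisors of n; a heap of size 1 is terminal
    v = 0
    for d in range(1, n // 2 + 1):  # every proper divisor is at most n/2
        if n % d == 0:
            v ^= sg[d]
    sg[n] = mex({v})

for n in range(1, N + 1):
    assert (sg[n] == 1) == (n > 1 and squarefree(n))
```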
We have investigated a few more of the fullset games, including those in the subclass
“subtraction”, but have not yet found other examples with sufficient regularity to
prove basic correspondence with number theory. Apart from fullset maliquot, this
class for now remains a mystery. For example, for fullset totient, the sequence
starts 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0. The heap of size one has nim-value zero
by definition, and the heap of size two has nim-value one, because one is relatively
prime with two. 𝒮𝒢 (3) = 0, because the option is 1 + 2 of nim-value 0 ⊕ 1 = 1. The
sequence of the indices of the ones is 2, 4, 5, 10, 11, 12, 13, 15, and so on. This sequence
does not yet appear in OEIS.
7 Powerset games
We study six versions of the powerset games on arithmetic functions, and we begin by
listing the first 20 nim-values for the respective ruleset. All start at a heap of size one,
except item 2, which starts at the empty heap (defined as terminal).
1. powerset maliquot: move-to an element in the powerset of the proper divisors.
0, 1, 1, 2, 1, 2, 1, 4, 2, 2, 1, 4, 1, 2, 2, 8, 1, 4, 1.
0, 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 16, 1, 2, 1.
For single heaps, these games tend to have nim-values that are powers of two. The intuition
is that, by induction, a powerset of options gives plenty of opportunity to construct any
nim-value between consecutive powers of two by using various sums of single heaps. We will
study the precise behavior in a couple of instances, namely items 2, 3, and 5.
Proof. A heap of size zero has nim-value 0 because it is terminal by definition. The
heap of size one has nim-value 1 = 2^0 because 1 | 1. The heap of size two has nim-value
2 = 2^1 because 1, 2 | 2, and 𝒮𝒢(2 − 1) = 1, 𝒮𝒢(2 − 2) = 0. Both these cases satisfy
the largest-power-of-two-divisor criterion.
Suppose that the statement holds for all numbers smaller than the heap size n = 2^p a
with a odd, say. We must show that all nim-values less than 2^p exist among the options
of n. For each q < p, we will find a number 0 ≤ m < n with 2^q the largest power of 2
divisor of m, and where n − m is a divisor of n. For example, if m = n − 2^q a ∈ ℕ, then
n − m = 2^q a | n, and m = 2^q a(2^{p−q} − 1) has the greatest power of two divisor 2^q. By
induction, 𝒮𝒢(m) = 2^q. Letting q range between 0 and p − 1, by the rules of powerset and
by using the disjunctive sum operator, this establishes that all nim-values less
than 2^p exist among the options of n.
Next, we must prove that the nim-value 2^p does not exist among the options. It suffices
to show that no individual heap in an option, which is a disjunctive sum, is of the
same form as n. This follows by applying the nim-sum, since, by induction, all numbers
smaller than n have nim-values powers of two.
A divisor of n is of the form 2^q y, where y | a is odd, and where q ≤ p.
Suppose first that q = p. Then n − 2^p y = 2^p y(a/y − 1); but a/y is odd, and hence a/y − 1
is even, so n − 2^p y = 2^z b with z > p and b odd, unless a = y, when n − 2^p y = 0.
In case q < p, we get n − 2^q y = 2^q y(2^{−q} n/y − 1), and since 2^{−q} n/y − 1 is odd, by
induction, the heap is not of the same form (since q < p).
Recall the indexing function i_o of the largest odd divisor, concerning the singleton
version of maliquant. It applies here as well, with some initial modification; although
it looks like one could “peel off” the 2s, this does not work, due to the irregular set of
initial nim-values.
Theorem 16. Consider powerset maliquant. The sequence starts at a heap of size
one, and the first eight nim-values are 0, 0, 1, 0, 2, 1, 4, 8. Otherwise, if n = 2k + 1, k ≥ 4,
then 𝒮𝒢(n) = 2^k, and if n ≥ 10 is even, then 𝒮𝒢(n) = 𝒮𝒢(n/2).
Proof. The smaller heaps are easy to justify by hand. The heap of size 8 is pivotal. It
achieves nim-value 0 by the option 3 + 6, both numbers being nondivisors. The nim-
values 1, 2, 4 may be combined freely by using the nondivisor heaps 5, 6, 7. Hence the
nim-values of the small heaps are verified.
For the base cases, we consider the heaps of sizes 9 and 10, of nim-values 16 = 2^4
(with 9 = 2 ⋅ 4 + 1) and 2 = 𝒮𝒢(5), respectively.
For the induction, let us start with a heap of even size, say n = 4t + 2. It suffices to
show that 𝒮𝒢(n) = 𝒮𝒢(n/2). Observe that each number between n/2 and n is a nondivisor
of n and hence may be part of a disjunctive sum used to build desirable nim-values by
induction. Since n/2 = 2t + 1 is odd, each power of two 2^4, . . . , 2^t appears among the nim-values
for heap sizes in [9, n/2]. By induction each power of two 2^0, . . . , 2^{t−1} appears as
a nim-value in the heap interval I = [n/2 − 1, . . . , n − 1]. Namely, for y ∈ [0, t − 1], multiply
2y + 1 by 2 iteratively until 2^s(2y + 1) ∈ I. Thus all nim-values smaller than 2^t appear among the
options, but note that the nim-value 2^t appears only as a nim-value for a divisor of n,
and hence this is the minimal exclusive. This proves that 𝒮𝒢(n) = 2^t if n is even, as
desired.
Now consider odd n = 2t + 1, say. By induction the nim-value of each heap smaller
than n is less than 2^t. In case of t even, the powers of two 2^{t/2}, . . . , 2^{t−1} appear as nim-values
of odd heaps in the interval [t, . . . , 2t − 1]. Similarly to the case of even n, the
smaller powers of two can also be found as nim-values in this interval. Therefore each
nim-value smaller than 2^t appears as an option of a disjunctive sum of nondivisors of
n. Hence the minimal exclusive is 𝒮𝒢(n) = 2^t.
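Because the option values of a powerset game are XORs over subsets, they form (almost) a GF(2) linear span, so Theorem 16 can be verified without enumerating subsets. A sketch using our own linear-algebra shortcut, not the authors' method:

```python
def span_mex(values):
    # option values are XORs over nonempty subsets of `values`; these form the
    # GF(2) linear span of the values, except that 0 is only achievable when
    # the values are linearly dependent (a repeated value, or a value 0)
    if not values:
        return 0                      # terminal position
    basis = {}                        # leading bit -> basis vector
    dependent = False
    for v in values:
        while v:
            b = v.bit_length() - 1
            if b not in basis:
                basis[b] = v
                break
            v ^= basis[b]
        else:
            dependent = True
    if not dependent:
        return 0                      # 0 is not an option value, so mex = 0
    j = 0                             # smallest non-leading bit: the span
    while j in basis:                 # contains all of [0, 2^j) but not 2^j
        j += 1
    return 1 << j

N = 150
sg = [0] * (N + 1)
for n in range(2, N + 1):
    # powerset maliquant: move-to any nonempty subset of the nondivisors
    # m < n, played as a disjunctive sum of heaps
    sg[n] = span_mex([sg[m] for m in range(2, n) if n % m != 0])

assert sg[1:9] == [0, 0, 1, 0, 2, 1, 4, 8]
assert sg[9] == 16 and sg[10] == 2
for k in range(4, (N - 1) // 2 + 1):
    assert sg[2 * k + 1] == 1 << k
for n in range(10, N + 1, 2):
    assert sg[n] == sg[n // 2]
```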
Recall the function i_p, the index of the smallest prime divisor of n, where the prime
2 has index 1, for the solution of totative from Section 2.3. It applies for the powerset
game as well.
Proof. The nim-value of a heap of size one is 0, since it is terminal. A heap of size
two has a move to the heap of size one, because 1 is relatively prime to all numbers
greater than 1. Hence 𝒮𝒢(2) = 1 = 2^0. Suppose that the statement holds for all numbers
smaller than n > 1.
If n is even, then we must prove that there is a move to nim-value 0 but no move
to nim-value 1. The first part is done in the first paragraph. Hence let us show by in-
duction that there is no move to nim-value 1. Since all smaller heaps of odd size have
even nim-values, a disjunctive sum of nim-value 1 must contain a heap of even size.
This is impossible, since heaps of even size are not relatively prime with n.
Suppose that n is odd, so that the index of the smallest prime divisor of n is i > 1.
We must show that 𝒮𝒢(n) = 2^{i−1}. By induction each smaller prime with index
q < i, say, has appeared as smallest prime divisor of a heap size smaller than n, with nim-value 2^{q−1}. Since
any disjunctive sum of heap sizes relatively prime to n is allowed as an option, by
induction each nim-value smaller than 2^{i−1} can be obtained.
Next, we show that there is no option of nim-value 2^{i−1}. This generalizes the idea
used in the second paragraph. A disjunctive sum of nim-value 2^{i−1} must contain a
component of nim-value 2^{i−1}. However, by induction those heap sizes are not relatively
prime to n.
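The same span computation verifies this claim for powerset totative: 𝒮𝒢(n) = 2^{i_p(n)−1}, with i_p the index of the smallest prime divisor. A sketch, with helper names of our own:

```python
from math import gcd

def span_mex(values):
    # XORs over nonempty subsets form the GF(2) span of the values, except
    # that 0 is only achievable when the values are linearly dependent
    if not values:
        return 0
    basis = {}
    dependent = False
    for v in values:
        while v:
            b = v.bit_length() - 1
            if b not in basis:
                basis[b] = v
                break
            v ^= basis[b]
        else:
            dependent = True
    if not dependent:
        return 0
    j = 0
    while j in basis:
        j += 1
    return 1 << j

N = 120
primes = [p for p in range(2, N + 1) if all(p % q for q in range(2, p))]

def ip(n):
    # index of the smallest prime divisor: ip(2) = 1, ip(3) = 2, ip(5) = 3, ...
    for i, p in enumerate(primes, start=1):
        if n % p == 0:
            return i

sg = [0] * (N + 1)
for n in range(2, N + 1):
    # powerset totative: move-to any nonempty subset of the numbers m < n
    # relatively prime to n, played as a disjunctive sum of heaps
    sg[n] = span_mex([sg[m] for m in range(1, n) if gcd(m, n) == 1])

assert sg[1] == 0
for n in range(2, N + 1):
    assert sg[n] == 1 << (ip(n) - 1)
```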
n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1 1
3 1 1
4 1, 2, 3 2
5 1 1
6 1, 2, 3, 4, 5, 6 ∞3
7 1 1
8 1, 2, 3, 4, 5, 6, 7 ∞3
9 1, 3, 4 3
7 Fraenkel et al. have developed a generalized Sprague–Grundy function for cyclic games on a finite
number of positions.
Here ∞3 means nim-value 3, but with an additional option leading into a loop (an
infinite play). Consider, for example, the disjunctive sum of heaps 6 + 9. Then every
move apart from playing to the loopy option is losing, so this game is a draw. However,
playing instead 6 + 7, the first player wins by moving to 2 + 7, 3 + 7, or 5 + 7; that is,
the effect of a loopy game component is sensitive to the disjunctive sum. The ω game
also seems to have an interesting sum variation.
Wythoff partizan subtraction [6] studies a partizan, so-called complementary, subtraction
game. The players' move options are conveniently represented by one single
sequence of natural numbers: one of the players subtracts integers from the
sequence, whereas the other player subtracts positive integers that do not appear in
the sequence (provided that the heap remains nonnegative). Following that idea, here
any arithmetic function defines a partizan move-to or subtraction game by letting one
of the players play numbers from the arithmetic function, whereas the other plays
numbers from its negation. The partizan game values (canonical forms) of such games
remain a big open problem.
Bibliography
[1] E. R. Berlekamp, J. H. Conway, R. K. Guy, Winning Ways, Academic Press, London, 1982.
[2] C. Bouton, Nim, a game with a complete mathematical theory, Annals of Math., 2nd Ser. 3
(1901–2), 35–39.
[3] A. Dailly, E. Duchene, U. Larsson, G. Paris, Partition games, Discrete Appl. Math. 285 (2020),
509–525.
[4] P. Grundy, Mathematics and games, Eureka 2 (1939), 6–8.
[5] G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers (5th ed.), The Clarendon
Press, Oxford University Press, New York, 1979.
[6] N. Mc Kay, U. Larsson, R. J. Nowakowski, A. Siegel, Wythoff partizan subtraction, Int. J. Game
Theory 47 (2018), 613–652. Special Issue on Combinatorial Games, 2018. Invited paper from
Combinatorial Game Theory Colloquium, Lisbon, 2015.
[7] H. Shapiro, An arithmetic function arising from the Φ function, Am. Math. Mon. 50:1 (1943),
18–30.
[8] N. Sloane, The On-Line Encyclopedia of Integer Sequences (OEIS), website at http://oeis.org/.
[9] R. Sprague, Über mathematische Kampfspiele, Tôhoku Math. J. 41 (1936), 438–444.
Yuki Irie
A base-p Sprague–Grundy-type theorem for
p-calm subtraction games: Welter’s game and
representations of generalized symmetric
groups
Abstract: For impartial games Γ and Γ′ , the Sprague–Grundy function of the disjunc-
tive sum Γ + Γ′ is equal to the Nim-sum of their Sprague–Grundy functions. In this
paper, we introduce p-calm subtraction games and show that for p-calm subtraction
games Γ and Γ′ , the Sprague–Grundy function of a p-saturation of Γ + Γ′ is equal to the
p-Nim-sum of the Sprague–Grundy functions of their p-saturations. Here a p-Nim-sum
is the result of addition without carrying in base p, and a p-saturation of Γ is an impar-
tial game obtained from Γ by adding some moves. It will turn out that Nim and Welter’s
game are p-calm. Further, using the p-calmness of Welter’s game, we generalize a re-
lation between Welter’s game and representations of symmetric groups to disjunctive
sums of Welter’s games and representations of generalized symmetric groups; this re-
sult is described combinatorially in terms of Young diagrams.
1 Introduction
Base 2 plays a key role in combinatorial game theory. Specifically, the Sprague–Grundy
function of the disjunctive sum of two impartial games is equal to the Nim-sum of their
Sprague–Grundy functions. Here a Nim-sum is the result of addition without carrying
in base 2. In particular, the Sprague–Grundy value of a position in Nim equals the Nim-
sum of the heap sizes. It is rare that the Sprague–Grundy function of an impartial game
can be written explicitly like Nim. Another well-known example is Welter’s game, a
subtraction game like Nim; Welter [21] gave an explicit formula for its Sprague–Grundy
function by using the binary numeral system (see Theorem 2.8).
A few games related to base p have also been found, where p is an integer greater
than 1, not necessarily prime. For example, Flanigan found a game, called Rimp ;
the Sprague–Grundy value of a position in Rimp equals the p-Nim-sum of the heap
Acknowledgement: This work was supported by JSPS KAKENHI Grant Number JP20K14277.
The author would like to thank the anonymous referee for carefully reading the paper and for valuable
comments.
Yuki Irie, Research Alliance Center for Mathematical Sciences, Tohoku University, Miyagi, Japan,
e-mail: yirie@tohoku.ac.jp
https://doi.org/10.1515/9783110755411-015
282 | Y. Irie
sizes, where a p-Nim-sum is the result of addition without carrying in base p.1 We use
⊕p for the p-Nim-sum.2 For example, consider the heap (3, 7, 4). Whereas in Nim the
Sprague–Grundy value of (3, 7, 4) is equal to
3 ⊕2 7 ⊕2 4 = (1 + 2) ⊕2 (1 + 2 + 4) ⊕2 (4) = 0,
in Rim3 it is equal to
3 ⊕3 7 ⊕3 4 = (3) ⊕3 (1 + 2 ⋅ 3) ⊕3 (1 + 3) = 2 + 3 = 5.
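The p-Nim-sum in the example can be computed digit by digit (a small sketch):

```python
def nim_sum_p(p, *heaps):
    # digitwise addition modulo p with no carrying; p = 2 gives the usual XOR
    total, place, hs = 0, 1, list(heaps)
    while any(hs):
        total += place * (sum(h % p for h in hs) % p)
        hs = [h // p for h in hs]
        place *= p
    return total

assert nim_sum_p(2, 3, 7, 4) == 0
assert nim_sum_p(3, 3, 7, 4) == 5
```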
Thus we can say that Rimp is a base-p version of Nim. Irie [6] observed that there
are infinitely many base-p versions of Nim. From this observation he introduced
p-saturations and showed that a p-saturation of Nim is a base-p version of Nim, that
is, the Sprague–Grundy value of a position in a p-saturation of Nim equals the p-Nim-
sum of the heap sizes. Figure 15.1 shows an example of a 3-saturation of Nim. Although
we can take tokens from just one heap in Nim, in a p-saturation of Nim one may take
tokens from multiple heaps, subject to a restriction. Incidentally, Rimp is one of
the p-saturations of Nim, and p-saturations are defined for subtraction games (see
Section 3 for details).
Further, it was shown that a p-saturation of Welter’s game is a base-p version of Wel-
ter’s game [6]. In other words, we can obtain an explicit formula for the Sprague–
Grundy function of a p-saturation of Welter’s game by rewriting Welter’s formula with
base p (see Theorem 3.6).
In this paper an impartial game is defined to be a digraph such that the maximum
length of a walk from each vertex is finite. We will recall the basics of impartial games
in Section 2. Let Γ1 and Γ2 be two impartial games. Then the Sprague–Grundy function
of the disjunctive sum Γ1 + Γ2 is equal to the Nim-sum of their Sprague–Grundy func-
tions. The fundamental question of this paper is whether there exists an operation +p
such that the Sprague–Grundy function of Γ1 +p Γ2 is equal to the p-Nim-sum of their Sprague–Grundy functions.
1 Rimp was devised by James A. Flanigan in an unpublished paper entitled “NIM, TRIM and RIM.”
2 The operation ⊕p is different from that related to Moore’s Nimk−1 [10] and Li’s k-person Nim [8].
These games were analyzed using addition modulo k in base 2.
A base-p Sprague–Grundy-type theorem for p-calm subtraction games | 283
3 Although the p′ -component theorem holds only when p is prime, a slightly weaker result holds even
when p is not prime. By using this we can obtain an explicit formula for the Sprague–Grundy function
of a p-saturation of Welter’s game.
2 Subtraction games
This section provides the basics of impartial games. We define subtraction games, dis-
junctive sums, and Sprague–Grundy functions. See [1, 3] for more details of combina-
torial game theory.
We first introduce some notation for impartial games. Let Γ be a digraph with vertex set 𝒫Γ and edge set ℰΓ , that is, 𝒫Γ is a set, and ℰΓ ⊆ 𝒫Γ × 𝒫Γ . As we have defined in the
introduction, the digraph Γ is called a (short) impartial game if the maximum length
lgΓ (A) of a walk from each vertex A is finite. Let Γ be an impartial game. The vertex set
𝒫Γ is called its position set. Let A and B be two positions in Γ. If (A, B) ∈ ℰΓ , then B is
called an option of A. If there exists a path from A to B, then B is called a descendant
of A. A descendant B of A is said to be proper if B ≠ A.
Example 2.1. The digraph with vertex set { 1, 2, 3 } and edge set { (1, 2), (2, 3), (1, 3) } is an
impartial game. However, the digraph with vertex set { 1, 2 } and edge set { (1, 2), (2, 1) }
is not an impartial game since it has the walk (1, 2, 1, 2, . . .) of infinite length.
Remark 2.2. Let Γ be an impartial game with at least one position. We can consider
Γ as the following two-player game. Before the game, we put a token on a starting
position A ∈ 𝒫Γ . The first player moves the token from A to its option B. Similarly,
the second player moves the token from B to its option C. In this way, the two play-
ers alternately move the token. The winner is the player who moves the token last.
For example, let Γ be the impartial game with position set { 1, 2, 3, 4 } and edge set
{ (1, 2), (2, 3), (1, 4) }, and start at position 1. The first player can move the token to ei-
ther position 2 or 4. If she moves it to position 2, then the second player moves it to 3
and wins. Thus she should move the token to 4.
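The play described in Remark 2.2 can be analyzed with the standard mex recursion for Sprague–Grundy values; a small sketch (Python; the digraph is the one from the remark, and `sg`/`mex` are hypothetical names):

```python
from functools import lru_cache

# The impartial game of Remark 2.2, as a digraph: position -> options.
edges = {1: [2, 4], 2: [3], 3: [], 4: []}

def mex(values):
    """Minimum excluded nonnegative integer."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def sg(pos):
    return mex({sg(opt) for opt in edges[pos]})

# Under normal play (last player to move wins), the first player wins
# from A iff sg(A) != 0, by moving to an option of value 0.
print(sg(1), [b for b in edges[1] if sg(b) == 0])  # 2 [4]
```

The winning first move from position 1 is to the option of Sprague–Grundy value 0, namely position 4, as in the remark.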
We now define subtraction games. Let ℕ be the set of nonnegative integers. Ele-
ments in ℕm will be denoted by upper-case letters, and components of them by lower-
case letters with superscripts. For example, A = (a1 , . . . , am ) ∈ ℕm . Let 𝒫 ⊆ ℕm and
C ⊆ ℕm \ { (0, . . . , 0) }. Define Γ(𝒫 , C ) to be the impartial game with position set 𝒫 and
edge set

{ (A, B) ∈ 𝒫 × 𝒫 : A − B ∈ C }.

Cm1 = { C ∈ ℕ^m : wt(C) = 1 },
where wt(C) is the Hamming weight of C, that is, the number of nonzero components
of C. The subtraction game Γ(ℕm , Cm1 ) is called Nim and is denoted by 𝒩m . For exam-
ple, in 𝒩2 the options of (1, 2) are (0, 2), (1, 1), and (1, 0).
Example 2.4. Let ℳm = Γ(ℕm \ { (0, . . . , 0) } , Cm1 ). The winner in ℳm is the loser in
𝒩m , and ℳm is called misère Nim.
𝒫 = { A ∈ ℕ^m : a^i ≠ a^j for 1 ≤ i < j ≤ m }.
The subtraction game Γ(𝒫 , Cm1 ) is called Welter’s game and is denoted by 𝒲m . For ex-
ample, in 𝒲2 the options of (1, 2) are (0, 2) and (1, 0). Note that

lg𝒲m (A) = ∑_{i=1}^{m} a^i − (1 + 2 + ⋅ ⋅ ⋅ + (m − 1)) = ∑_{i=1}^{m} a^i − (m choose 2). (15.1)
For example, ((1, 3, 4), (0, 3, 4), (0, 2, 4), (0, 1, 4), (0, 1, 3), (0, 1, 2)) is a walk of maximum length from (1, 3, 4) in 𝒲3 .
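Both the options in Welter's game and formula (15.1) for the maximum walk length are easy to check by brute force; a sketch (Python, with hypothetical helper names `welter_options` and `lg`):

```python
from math import comb

def welter_options(pos):
    """Options in Welter's game: decrease one coordinate so that all
    coordinates stay nonnegative and pairwise distinct."""
    opts = []
    for i, a in enumerate(pos):
        for b in range(a):
            new = pos[:i] + (b,) + pos[i + 1:]
            if len(set(new)) == len(new):
                opts.append(new)
    return opts

def lg(pos):
    """Maximum length of a walk from pos, by direct recursion."""
    return max((1 + lg(o) for o in welter_options(pos)), default=0)

print(sorted(welter_options((1, 2))))         # [(0, 2), (1, 0)]
print(lg((1, 3, 4)), 1 + 3 + 4 - comb(3, 2))  # 5 5
```

The value lg((1, 3, 4)) = 5 agrees with (15.1) and with the six-position walk from (1, 3, 4).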
For example,

𝒩m = 𝒩1 + ⋅ ⋅ ⋅ + 𝒩1 (m summands).
Note that the disjunctive sum of two subtraction games is again a subtraction game. Indeed, if 𝒫^i ⊆ ℕ^{m_i} and Γi = Γ(𝒫^i , C^i ) for i ∈ { 1, 2 }, then Γ1 + Γ2 = Γ(𝒫^1 × 𝒫^2 , C ), where

C = { (c^{1,1} , . . . , c^{1,m_1} , 0, . . . , 0) : (c^{1,1} , . . . , c^{1,m_1} ) ∈ C^1 } (with m_2 trailing zeros)
  ∪ { (0, . . . , 0, c^{2,1} , . . . , c^{2,m_2} ) : (c^{2,1} , . . . , c^{2,m_2} ) ∈ C^2 } (with m_1 leading zeros).
ord2 (a) = max { L ∈ ℕ : 2^L | a } if a ≠ 0, and ord2 (a) = ∞ if a = 0.
Example 2.9. Let A be the position (7, 5, 3) in 𝒲3 . By Theorem 2.8 we see that
3.1 Notation
Fix an integer p greater than 1, and let Ω = { 0, 1, . . . , p − 1 }. For a, L ∈ ℕ, let a_L denote the Lth digit in the p-adic expansion of a. Then

a = ∑_{L∈ℕ} a_L p^L , a_L ∈ Ω.
∑_i a^i − b^i ≡ ⨁p,i a^i ⊖p b^i ≡ [0, . . . , 0, ⨁_i a^i_N ⊖ b^i_N ]_(p) (mod p^{N+1}). (15.2)
3.2 p-saturations
We define p-saturations. For a ∈ ℕ, let ordp (a) denote the p-adic order of a, that is,
ordp (a) = max { L ∈ ℕ : p^L | a } if a ≠ 0, and ordp (a) = ∞ if a = 0.
For example, ord2 (12) = ord2 ([0, 0, 1, 1](2) ) = 2 and ord3 (12) = ord3 ([0, 1, 1](3) ) = 1.
Define
C^{(p)}_m = { C ∈ ℕ^m \ { (0, . . . , 0) } : ordp (∑_i c^i) = mordp (C) },
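These definitions can be made concrete in a few lines; a sketch (Python, not from the paper). Note that mordp is not restated in this excerpt; the code assumes, consistently with how it is used below, that mordp(C) is the minimum p-adic order of the components of C:

```python
import math

def ordp(p, a):
    """p-adic order of a, with ordp(0) = infinity by convention."""
    if a == 0:
        return math.inf
    k = 0
    while a % p == 0:
        a //= p
        k += 1
    return k

def mordp(p, vec):
    # Assumption in this sketch: mordp(C) is the minimum p-adic order
    # of the components of C (consistent with its use in the text).
    return min(ordp(p, c) for c in vec)

def in_C(p, vec):
    """Membership in Cm^(p): nonzero, and ordp of the component sum
    equals mordp of the vector."""
    return any(vec) and ordp(p, sum(vec)) == mordp(p, vec)

print(ordp(2, 12), ordp(3, 12))          # 2 1  (as in the text)
print(in_C(3, (1, 1)), in_C(3, (1, 2)))  # True False
```

For instance, (1, 1) belongs to C2^(3) while (1, 2) does not, which is exactly the distinction used in Example 3.2.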
Note that if Γ satisfies (∗), then Γ̃ is a p-saturation of Γ if and only if sgΓ̃ = sgΓ(𝒫 ,C^{(p)}_m) . Moreover, if two subtraction games Γ1 and Γ2 satisfy (∗), then so does Γ1 + Γ2 .
It is known that we can obtain base-p versions of some games by using p-satura-
tions.
Example 3.2 ([6]). Let Γ̃ = Γ(ℕ2 , C2(3) ). Table 15.1 shows the Sprague–Grundy values of
some positions in Γ.̃ It is easy to see that sgΓ̃ (a, 0) = sgΓ̃ (0, a) = a. Since (1, 1) ∈ C2(3) , it
follows that (0, 0) is an option of (1, 1). Thus sgΓ̃ (1, 1) = 2. We also see that sgΓ̃ (1, 2) = 0
because (0, 0) is not an option of (1, 2).
Table 15.1: The Sprague–Grundy values sgΓ̃ (a, b) of some positions in Γ̃ (rows a, columns b).

a\b  0  1  2  3
0    0  1  2  3
1    1  2  0  4
2    2  0  1  5
3    3  4  5  6
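The table can be recomputed directly from the definitions; a brute-force sketch (Python, again assuming mordp is the minimum 3-adic order of the components):

```python
from functools import lru_cache

def ord3(a):
    """3-adic order; ord3(0) is treated as infinity."""
    if a == 0:
        return float("inf")
    k = 0
    while a % 3 == 0:
        a //= 3
        k += 1
    return k

def in_C(c1, c2):
    """(c1, c2) lies in C2^(3): nonzero, and ord3(c1 + c2) equals the
    minimum of ord3(c1) and ord3(c2)."""
    return (c1, c2) != (0, 0) and ord3(c1 + c2) == min(ord3(c1), ord3(c2))

@lru_cache(maxsize=None)
def sg(a, b):
    opts = {sg(a - c1, b - c2)
            for c1 in range(a + 1) for c2 in range(b + 1) if in_C(c1, c2)}
    g = 0
    while g in opts:  # mex of the option values
        g += 1
    return g

table = [[sg(a, b) for b in range(4)] for a in range(4)]
print(table)  # [[0, 1, 2, 3], [1, 2, 0, 4], [2, 0, 1, 5], [3, 4, 5, 6]]
```

The computed values match Table 15.1; in particular sg(1, 1) = 2, sg(1, 2) = 0, and sg(3, 3) = 6.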
Remark 3.3. Note that 𝒩m is a 2-saturation of itself. This means that adding an edge
(A, B) with A − B ∈ Cm(2) to 𝒩m does not change its Sprague–Grundy function. Inciden-
tally, it is known that Γ(ℕm , C ) is a 2-saturation of 𝒩m if and only if Cm1 ⊆ C ⊆ Cm(2) [2].
where N = mordp (A − B). Indeed, since ai ≡ bi (mod pN ), it follows from Lemma 3.1
that
∑_i a^i − b^i ≡ [0, . . . , 0, ⨁_i a^i_N ⊖ b^i_N ]_(p) (mod p^{N+1}).
Theorem 3.5 ([7]). Let Γ̃ be a p-saturation of misère Nim ℳm . If A is a position in Γ,̃ then
Example 3.7. Let Γ̃ be a 5-saturation of 𝒲3 , and let A be the position (7, 5, 3) in Γ.̃ It
follows from Theorem 3.6 that
Lemma 3.8. Let Γ1 be a subtraction game Γ(𝒫 , C ) with C ⊆ Cm(p) , let Γ = Γ1 + 𝒩1 , and let
Γ̃ be a p-saturation of Γ. Suppose that
where α = sgΓ̃1 (A) and β = sgΓ̃1 (B). Then α ≠ β because if α = β, then by Lemma 3.1
[0, . . . , 0, ⨁_i a^i_N ⊖ b^i_N ]_(p) ≡ ∑_i a^i − b^i ≢ α − β ≡ 0 (mod p^{N+1}),
We show that (B, 0) is an option of (A, β ⊖p α) in Γ,̃ which will imply that (15.8) does not
hold. Let M = ordp (β ⊖p α).
((∑_i a^i − b^i) + (β ⊖p α − 0))_M = β_M ⊖ α_M ≠ 0.
By Lemma 3.1
∑_i a^i − b^i ≡ [0, . . . , 0, ⨁_i a^i_N ⊖ b^i_N ]_(p) (mod p^{N+1}) (15.11)
and
α_N ⊖ β_N ≠ ⨁_i a^i_N ⊖ b^i_N .
Hence
This implies that (B, 0) is an option of (A, β ⊖p α) in Γ.̃ Therefore (15.8) does not hold.
Let Γ be a subtraction game Γ(𝒫 , C ) with C ⊆ Cm(p) . The game Γ is said to be p-calm
if it satisfies (15.9), that is, for every position A and every proper descendant B of A,
⨁p,i a^i ≡ ⨁p,i b^i (mod p^N ),
where N = mordp (A − B). It follows from Example 3.2 and Lemma 3.1 that

⨁p,i (a^i ⊖p b^i) ≡ (⨁p,i a^i) ⊖p (⨁p,i b^i) ≡ ∑_i a^i − b^i (mod p^{N+1}).
Thus 𝒩m is p-calm. We can also easily show that ℳm and 𝒲m are p-calm (see Sec-
tion 3.5).
Remark 3.10. There exist non-p-calm subtraction games. For example, let Γ be the
subtraction game Γ({ 0, p } , C11 ). It is clear that Γ is a p-saturation of itself. Since
sgΓ (0) = sgΓ ([0](p) ) = 0 and sgΓ (p) = sgΓ ([0, 1](p) ) = 1, it follows that Γ is not p-calm.
Theorem 3.11. For i ∈ { 1, . . . , k }, let Γi be a p-calm subtraction game. Then the disjunc-
tive sum Γ1 + ⋅ ⋅ ⋅ + Γk is p-calm. Moreover, if Γ̃ is a p-saturation of Γ1 + ⋅ ⋅ ⋅ + Γk and A is a
position in Γ,̃ then
Lemma 3.12. Let Γ be a p-calm subtraction game, and let ϕ be the Sprague–Grundy
function of its p-saturation. If A is a position in Γ and B is its proper descendant, then
ϕ(A) ⊖p ϕ(B) ≡ ϕ(A) − ϕ(B) ≡ [0, . . . , 0, ⨁_i a^i_N ⊖ b^i_N ]_(p) (mod p^{N+1}), (15.13)
∑_i a^i − b^i ≡ ⨁p,i a^i ⊖p b^i ≡ [0, . . . , 0, ⨁_i a^i_N ⊖ b^i_N ]_(p) (mod p^{N+1}).
we see that ϕ(A) ≡ ϕ(B) (mod pN ). It follows from Lemma 3.1 that
ϕ1 (A1 ) ⊕p ⋅ ⋅ ⋅ ⊕p ϕk (Ak ). To prove that sgΓ̃ (A) = ϕ(A), it suffices to show the following
two statements.
(SG1) If B is an option of A in Γ,̃ then ϕ(B) ≠ ϕ(A).
(SG2) If 0 ≤ β < ϕ(A), then ϕ(B) = β for some option B of A in Γ.̃
Moreover, since ϕi (Ai ) ≡ ϕi (Bi ) (mod pN ), we see that ϕ(A) ≡ ϕ(B) (mod pN ). By
Lemma 3.1 and (15.15)
⨁_{i,j} a^{i,j}_N ⊖ b^{i,j}_N ≠ 0.
1. β^i ≤ α^i .
2. ordp (∑_i α^i − β^i) = min { ordp (α^i − β^i) : 1 ≤ i ≤ k }.
3. β^1 ⊕p ⋅ ⋅ ⋅ ⊕p β^k = β.
If βi = αi , then let Bi = Ai . If βi < αi , then since αi = ϕi (Ai ) = sgΓ̃i (Ai ), we see that Ai has
an option Bi such that ϕi (Bi ) = βi in Γ̃ i . Let B = (B1 , . . . , Bk ). Then ϕ(B) = β1 ⊕p ⋅ ⋅ ⋅ ⊕p
βk = β.
We prove that B is an option of A. By (15.14) it suffices to show that βN ≠ αN , where
N = mordp (A − B). Let N^i = mordp (A^i − B^i ). We first show that

N = ordp (∑_i α^i − β^i). (15.17)

(∑_i α^i − β^i)_N = ⨁_i α^i_N ⊖ β^i_N = α_N ⊖ β_N .
By (15.17) we see that αN ≠ βN , and so B is an option of A in Γ.̃ Therefore ϕ(A) = sgΓ̃ (A).
In particular, if A is a position and B is its proper descendant, then by (15.14)
sgΓ̃ (A) − sgΓ̃ (B) = ϕ(A) − ϕ(B) ≡ ∑_{i,j} a^{i,j} − b^{i,j} (mod p^{N+1} ),
Corollary 3.13. If Γ is a subtraction game Γ(𝒫 , C ) with C ⊆ Cm(p) , then Γ and 𝒩1 satisfy
(PN) if and only if Γ is p-calm.
p-saturation. Suppose that for every α ∈ ℕ there exists a position A2 ∈ 𝒫Γ̃2 such that sgΓ̃2 (A2 ) = α. Then Γ1 and Γ2 satisfy (PN) if and only if Γ1 is p-calm. We can prove this
by the same argument as in the proof of Lemma 3.8, so we only sketch it. Suppose that
Γ1 is not p-calm. Then there exist a position A1 and its proper descendant B1 such that
Suppose that M < N. Then mordp (A − B) = M. Since (∑_{i,j} a^{i,j} − b^{i,j})_M = (∑_j a^{2,j} − b^{2,j})_M ≠ 0, it follows that B is an option of A. Suppose that M ≥ N. Then mordp (A − B) = N. It follows from (15.18) and (15.19) that α_N ⊖ β_N ≠ ⨁_j a^{1,j}_N ⊖ b^{1,j}_N . Hence

(∑_{i,j} a^{i,j} − b^{i,j})_N = (⨁_j a^{1,j}_N ⊖ b^{1,j}_N) ⊕ β_N ⊖ α_N ≠ 0.
ψ^{(p)} (A) = a^1 ⊕p ⋅ ⋅ ⋅ ⊕p a^m ⊕p (⨁p, i<j (p^{ordp (a^i − a^j )+1} − 1)). (15.20)
where A = (A1 , . . . , Ak ).
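Formula (15.20) can be evaluated mechanically; a sketch (Python, not from the paper) with hypothetical names `psi`, `pnimsum`, `ordp`. The entries of a position are pairwise distinct, so all p-adic orders below are finite:

```python
def pnimsum(p, *xs):
    """Digit-wise addition modulo p in base p (no carrying)."""
    xs, r, shift = list(xs), 0, 1
    while any(xs):
        r += (sum(x % p for x in xs) % p) * shift
        xs = [x // p for x in xs]
        shift *= p
    return r

def ordp(p, a):
    """p-adic order of a nonzero integer a."""
    a, k = abs(a), 0
    while a % p == 0:
        a //= p
        k += 1
    return k

def psi(p, pos):
    # Formula (15.20); entries of pos are assumed pairwise distinct.
    terms = list(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            terms.append(p ** (ordp(p, pos[i] - pos[j]) + 1) - 1)
    return pnimsum(p, *terms)

print(psi(2, (7, 5, 3)), psi(2, (6, 4, 2)), psi(5, (7, 5, 3)))  # 6 7 12
```

For p = 2 this is Welter's original formula; the value psi(5, (7, 5, 3)) = 12 agrees with the 5-adic hook-length computation given in Section 4.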
Proof. Let A be a position in 𝒲m , and let B be its proper descendant. Set N = mordp (A−
B). Then ai ≡ bi (mod pN ). In particular, ai − aj ≡ bi − bj (mod pN ). Since
(p^{ordp (h)+1} − 1)_L = p − 1 if h ≡ 0 (mod p^L ), and 0 if h ≢ 0 (mod p^L ),
we see that (p^{ordp (a^i − a^j )+1} − 1)_L = (p^{ordp (b^i − b^j )+1} − 1)_L for 0 ≤ L ≤ N. Thus

p^{ordp (a^i − a^j )+1} − 1 ≡ p^{ordp (b^i − b^j )+1} − 1 (mod p^{N+1} ).
Example 3.16. Let Γ be a 5-saturation of 𝒲3 + 𝒲1 , and let A be the position ((7, 5, 3), (3))
in Γ. By Theorem 3.15
Remark 3.17. We can show the p-calmness of ℳm similarly. Indeed, let A be a position
in ℳm , and let B be a proper descendant of A. Since
we see that p^{mordp (A)+1} − 1 ≡ p^{mordp (B)+1} − 1 (mod p^{N+1} ), where N = mordp (A − B). It follows from Theorem 3.5 that ℳm is p-calm.
Proposition 3.18 ([6]). Every position A in Welter’s game 𝒲m has a descendant B such
that lg𝒲m (B) = ψ(p) (B) = ψ(p) (A).
Proposition 3.18 says that A has a descendant B such that lg𝒲3 (B) = ψ(2) (B) = 7. Indeed, if B = (5, 3, 2), then
lg𝒲3 (B) = 5 + 3 + 2 − (3 choose 2) = 7,
and
If sgΓ (A) = lgΓ (A), then A is said to be full. Consider the following condition on an
impartial game Γ:
(FD) Every position A in Γ has a full descendant B with the same Sprague–Grundy
value as A.
It is easy to show that a p-saturation of Nim satisfies condition (FD). It follows from
Proposition 3.18 and Theorem 3.6 that a p-saturation of Welter’s game also satis-
fies (FD). Our aim is to prove that a p-saturation of disjunctive sums of Welter’s games
satisfies (FD). To this end, we show two lemmas.
Lemma 3.21. Let A be a full position in an impartial game Γ. If lgΓ (A) > 0, then A has a
full option B with lgΓ (B) = lgΓ (A) − 1. In particular, if 0 ≤ β ≤ lgΓ (A) − 1, then A has a full
descendant B with lgΓ (B) = β.
Proof. Since A is full, it follows that sgΓ (A) = lgΓ (A) > 0. Hence A has an option B
with sgΓ (B) = sgΓ (A) − 1 = lgΓ (A) − 1. Since sgΓ (B) ≤ lgΓ (B) ≤ lgΓ (A) − 1, we see that
sgΓ (B) = lgΓ (B) = lgΓ (A) − 1. Thus B is the desired option of A.
Lemma 3.22. For i ∈ { 1, . . . , k }, let Γi be an impartial game satisfying (FD), and let Γ be
an impartial game with position set 𝒫Γ1 × ⋅ ⋅ ⋅ × 𝒫Γk . If for A = (A1 , . . . , Ak ) ∈ 𝒫Γ ,
and
Proof. Let A be a position (A1 , . . . , Ak ) in Γ, and let αi = sgΓi (Ai ) and α = sgΓ (A) =
α1 ⊕p ⋅ ⋅ ⋅ ⊕p αk . By Lemma 3.21 it suffices to show that A has a full descendant B such
that sgΓ (B) ≥ α. Let
Then β ≥ α. We will show that A has a full descendant B such that sgΓ (B) = β.
We first show that there exist β1 , . . . , βk ∈ ℕ satisfying the following two condi-
tions:
1. βi ≤ αi .
2. β1 + ⋅ ⋅ ⋅ + βk = β1 ⊕p ⋅ ⋅ ⋅ ⊕p βk = β.
β^i_M ≤ α^i_M and β^1_M + ⋅ ⋅ ⋅ + β^k_M = p − 1 = β_M .
Since Γi satisfies (FD), it follows from Lemma 3.21 that Ai has a full descendant Bi
such that sgΓi (Bi ) = βi . Let B = (B1 , . . . , Bk ). Then
sgΓ (B) = β1 ⊕p ⋅ ⋅ ⋅ ⊕p βk = β.
Since
Example 3.24. Let Γ = 𝒲3 + 𝒲2 , and let A be the position ((6, 5, 2), (3, 1)) in Γ. Note that
Γ is a 2-saturation of itself and
By Proposition 3.23 the position A has a full descendant B such that lgΓ (B) =
ψ(2) (B) = 7. Indeed, if B = ((4, 3, 1), (3, 0)), then
and
{ (i, j) ∈ ℕ2 : 1 ≤ i ≤ m, 1 ≤ j ≤ λi }
is called the Young diagram or the Ferrers diagram corresponding to λ. We will identify
a partition with its Young diagram. A Young diagram Y can be visualized by using |Y|
cells. For example, Figure 15.2 shows the Young diagram (5, 4, 3).
where σ is a permutation with aσ(1) > aσ(2) > ⋅ ⋅ ⋅ > aσ(m) . We consider Y(A) as a Young
diagram by ignoring zeros in Y(A). For example, if A = (3, 7, 5) and A′ = (5, 9, 7, 1, 0),
then Y(A′ ) = Y(A) = (5, 4, 3). Note that the number of cells in Y(A) is equal to lg𝒲m (A).
In other words, Hi,j (Y) consists of the cells to the right of (i, j), the cells below (i, j), and
(i, j) itself. For example, Figure 15.3 shows the (1, 2)-hook of (5, 4, 3).
We now describe removing a hook. We first remove the cells of Hi,j (Y) from Y. If we obtain two diagrams, then we push them together. The resulting Young diagram is denoted by Y \ Hi,j (Y) and is said to be obtained from Y by removing the (i, j)-hook. For example, if Y = (5, 4, 3), then Y \ H1,2 (Y) = (3, 2, 1), and Y \ H3,2 (Y) = (5, 4, 1) (see Figure 15.4).
Figure 15.4: Removing the (1, 2)-hook and the (3, 2)-hook.
For example, consider moving from (7, 5, 3) to (1, 5, 3) in 𝒲3 . Since 7 is the largest
element in { 7, 5, 3 } and 1 is the second smallest element in ℕ \ { 7, 5, 3 }, it follows
from (15.24) that Y(1, 5, 3) = Y(7, 5, 3) \ H1,2 (Y(7, 5, 3)) = (3, 2, 1), as we have seen in Figure 15.4. In this way, moving in Welter's game corresponds to
removing a hook. Note that it is obvious that |Y(A)| = lg𝒲m (A).
The number of cells in the (i, j)-hook is called the hook-length of (i, j). Figure 15.5
shows the hook-lengths of the Young diagram (5, 4, 3). Let H (Y) be the multiset of
hook lengths of Y. For example, H (5, 4, 3) = { 1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 6, 7 }.
where Y = Y(A) and N(2) (h) = ∑_{L=0}^{ord2 (h)} 2^L = [1, . . . , 1]_(2) (with ord2 (h) + 1 ones).
= N(2) (1) ⊕2 N(2) (1) ⊕2 N(2) (1) ⊕2 N(2) (3) ⊕2 N(2) (3) ⊕2 N(2) (3)
⊕2 N(2) (5) ⊕2 N(2) (5) ⊕2 N(2) (7)
⊕2 N(2) (2) ⊕2 N(2) (6) ⊕2 N(2) (4)
= 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 3 ⊕2 3 ⊕2 7
= 6.
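The hook lengths and the ⊕2-sum of the values N(2)(h) can be checked by direct computation; a sketch (Python; `hook_lengths` and `N2` are hypothetical names):

```python
from functools import reduce

def hook_lengths(partition):
    """Sorted list of hook lengths of a Young diagram given as a partition."""
    conj = [sum(1 for r in partition if r >= j + 1)
            for j in range(max(partition))]
    return sorted(partition[i] - (j + 1) + conj[j] - (i + 1) + 1
                  for i in range(len(partition))
                  for j in range(partition[i]))

def ord2(h):
    k = 0
    while h % 2 == 0:
        h //= 2
        k += 1
    return k

def N2(h):
    """N_(2)(h) = 2^(ord2(h)+1) - 1, i.e. ord2(h)+1 ones in binary."""
    return 2 ** (ord2(h) + 1) - 1

hooks = hook_lengths((5, 4, 3))
print(hooks)  # [1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 6, 7]
print(reduce(lambda x, y: x ^ y, (N2(h) for h in hooks)))  # 6
```

This reproduces H(5, 4, 3) = {1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 6, 7} and the value 6 computed above.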
where Y = Y(A) and N(p) (h) = ∑_{L=0}^{ordp (h)} p^L = [1, . . . , 1]_(p) (with ordp (h) + 1 ones).
= N(5) (1) ⊕5 N(5) (1) ⊕5 N(5) (1) ⊕5 N(5) (2) ⊕5 N(5) (3)
⊕5 N(5) (3) ⊕5 N(5) (3) ⊕5 N(5) (4) ⊕5 N(5) (6) ⊕5 N(5) (7)
⊕5 N(5) (5) ⊕5 N(5) (5)
= N(5) (5) ⊕5 N(5) (5)
= 6 ⊕5 6
= 12.
Theorem 4.5 (Hook formula [4]). If Y is a Young diagram with n cells, then

f^Y = n! / ∏_{h∈H (Y)} h.
f^Y = 3! / (1^2 ⋅ 3) = 2.
Indeed, there are exactly two standard tableaux of shape Y (see Figure 15.7).
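The hook formula is straightforward to implement; a sketch (Python, with hypothetical helper names):

```python
from math import factorial, prod

def hook_lengths(partition):
    conj = [sum(1 for r in partition if r >= j + 1)
            for j in range(max(partition))]
    return [partition[i] - (j + 1) + conj[j] - (i + 1) + 1
            for i in range(len(partition))
            for j in range(partition[i])]

def num_standard_tableaux(partition):
    """Hook formula: f^Y = n! divided by the product of hook lengths."""
    hooks = hook_lengths(partition)
    return factorial(len(hooks)) // prod(hooks)

print(num_standard_tableaux((2, 1)))     # 2
print(num_standard_tableaux((4, 3, 2)))  # 168
print(num_standard_tableaux((3, 2, 2)))  # 21
```

The three printed values match the worked examples in this section.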
Theorem 4.8 ([6]). Every Young diagram Y includes a Young diagram Z with ψ(p) (Y)
cells such that f Z is prime to p.
Example 4.9. Let p = 2, and let Y be the Young diagram (4, 3, 2). By Theorem 4.5
f^Y = 9! / (1^3 ⋅ 2 ⋅ 3^2 ⋅ 4 ⋅ 5 ⋅ 6) = 168.
Note that f Y is even. Now Y corresponds to the position (6, 4, 2) in 𝒲3 . Since ψ(2) (Y) =
sg𝒲3 (6, 4, 2) = 7, Theorem 4.8 says that Y includes a Young diagram Z with 7 cells such
that f Z is odd. Indeed, as we have seen in Example 3.19, the position (5, 3, 2) is a full
descendant of (6, 4, 2), and these two positions have the same Sprague–Grundy value.
Let Z = Y(5, 3, 2) = (3, 2, 2). Then Z ⊂ Y, and
Moreover,

f^Z = 7! / (1^2 ⋅ 2^2 ⋅ 3 ⋅ 4 ⋅ 5) = 21.
In particular, f Z is odd.
Remark 4.10. Theorem 4.8 has the following algebraic interpretation. Let Y be a
Young diagram with n cells. Recall that the corresponding irreducible representation
ρY is a map from Sym(n) to GL(d, ℂ). Since Sym(n − 1) ⊆ Sym(n), we can obtain a rep-
resentation of Sym(n − 1) by restricting ρY to Sym(n − 1). The obtained representation
ρY |Sym(n−1) may not be irreducible and can be decomposed as follows:
ρY |Sym(n−1) = ⨁_{Y⁻} ρ_{Y⁻} , (15.26)
where the direct sum runs over all Young diagrams Y − obtained from Y by removing
a hook of length 1. For example,
From equation (15.26) and Theorem 4.8 we see that the restriction of ρY to Sym(ψ(p) (Y))
has a component with degree prime to p. Thus we will call Theorem 4.8 the p′ -
component theorem. For example, if Y = (4, 3, 2), then
ρY |Sym(ψ(2) (Y)) = ρ(4,3,2) |Sym(7) = (ρ(4,3,2) |Sym(8) )|Sym(7) decomposes as a direct sum of representations ρ_{Y⁻} indexed by the Young diagrams reached by removing hooks of length 1 [the Young diagrams shown in the original display are not reproduced here]. Since

deg ρ(3,2,2) = f^{(3,2,2)} = 21,
Let f Y denote the number of standard tableaux of shape Y . It follows from Theorem 4.5
that
f^Y = (n! / (n1 ! ⋅ ⋅ ⋅ nk !)) ⋅ (n1 ! / ∏_{h∈H (Y^1 )} h) ⋅ ⋅ ⋅ (nk ! / ∏_{h∈H (Y^k )} h) = n! / ∏_{h∈H (Y )} h ,
Theorem 4.11. Every k-tuple Y of Young diagrams includes a k-tuple Z of Young dia-
grams with ψ(p) (Y ) cells in total such that f Z is prime to p.
Proof. Let Y = (Y^1 , . . . , Y^k ), Y^l = (λ^{l,1} , . . . , λ^{l,m_l} ), A^l = (λ^{l,1} + m_l − 1, λ^{l,2} + m_l − 2, . . . , λ^{l,m_l} ), and A = (A^1 , . . . , A^k ). Then Y = (Y(A^1 ), . . . , Y(A^k )). We consider A as a position in Γ̃,
where Γ̃ is a p-saturation of 𝒲m1 +⋅ ⋅ ⋅+ 𝒲mk . By Proposition 3.23 the position A has a full
descendant B with the same Sprague–Grundy value as A, that is, lgΓ̃ (B) = ψ(p) (B) =
ψ(p) (A). Let Z = (Y(B1 ), . . . , Y(Bk )). We see that Z ⊆ Y . Moreover,
H (Y ) = { 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 5, 6 }

and

f^((4,4,2),(2,1)) = 13! / (1^4 ⋅ 2^3 ⋅ 3^2 ⋅ 4 ⋅ 5^2 ⋅ 6) = 144144.
Moreover,
By Theorem 4.11, Y includes a pair Z of Young diagrams with 7 cells in total such that
f Z is odd. Indeed, let Z = ((2, 2, 1), (2)). We see that
f^((2,2,1),(2)) = 7! / (1^3 ⋅ 2^2 ⋅ 3 ⋅ 4) = 105.
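The displayed identity for tuples of Young diagrams can be checked numerically; a sketch (Python; `f_tuple` is a hypothetical name):

```python
from math import factorial, prod

def hook_lengths(partition):
    conj = [sum(1 for r in partition if r >= j + 1)
            for j in range(max(partition))]
    return [partition[i] - (j + 1) + conj[j] - (i + 1) + 1
            for i in range(len(partition))
            for j in range(partition[i])]

def f_tuple(diagrams):
    """f^Y for a tuple of Young diagrams: n! divided by the product of
    all hook lengths of all components (the displayed identity)."""
    hooks = [h for d in diagrams for h in hook_lengths(d)]
    return factorial(len(hooks)) // prod(hooks)

print(f_tuple([(4, 4, 2), (2, 1)]))  # 144144
print(f_tuple([(2, 2, 1), (2,)]))    # 105
```

This reproduces f = 144144 for ((4, 4, 2), (2, 1)) and f = 105 for ((2, 2, 1), (2)).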
Figure 15.9: The hook lengths of ((4, 4, 2), (2, 1)) and ((2, 2, 1), (2)).
ρY |(ℤ/kℤ)≀Sym(n−1) = ⨁_{Y⁻} ρ_{Y⁻} , (15.27)
where the direct sum runs over all k-tuples Y − of Young diagrams obtained from Y by
removing a hook of length 1. For example,
ρ((4,4,2),(2,1)) |(ℤ/2ℤ)≀Sym(12) = ρ((4,3,2),(2,1)) ⊕ ρ((4,4,1),(2,1)) ⊕ ρ((4,4,2),(1,1)) ⊕ ρ((4,4,2),(2)) .
Let Y and Z be two k-tuples of Young diagrams such that Z ⊆ Y . By (15.27) the rep-
resentation ρZ is a component of the restriction of ρY to (ℤ/kℤ) ≀ Sym(|Z|). Therefore
Theorem 4.11 says that the restriction of ρY to (ℤ/kℤ) ≀ Sym(ψ(p) (Y )) has a component
with degree prime to p.
Bibliography
[1] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays, vol. 1,
2nd ed., A. K. Peters, Natick, MA, 2001.
[2] U. Blass, A. S. Fraenkel, and R. Guelman, How far can Nim in disguise be stretched?, J. Combin.
Theory Ser. A 84(2) (1998), 145–156.
[3] J. H. Conway, On Numbers and Games, 2nd ed., A. K. Peters, Natick, MA, 2001.
[4] J. S. Frame, G. de B. Robinson, and R. M. Thrall, The hook graphs of the symmetric group,
Canad. J. Math. 6 (1954), 316–324.
[5] P. M. Grundy, Mathematics and games, Eureka 2 (1939), 6–8.
[6] Y. Irie, p-Saturations of Welter’s game and the irreducible representations of symmetric groups,
J. Algebraic Combin. 48 (2018), 247–287.
[7] Y. Irie, The Sprague–Grundy functions of saturations of misère Nim, Electron. J. Combin. 28(1) (2021), #P1.58.
[8] S.-Y. R. Li, N-person Nim and N-person Moore’s games, Internat. J. Game Theory 7(1) (1978),
31–36.
[9] I. G. MacDonald, On the degrees of the irreducible representations of symmetric groups, Bull.
Lond. Math. Soc. 3(2) (1971), 189–192.
[10] E. H. Moore, A generalization of the game called Nim, Ann. of Math. 11(3) (1910), 93–94.
[11] G. Navarro, Character Theory and the McKay Conjecture, Cambridge Studies in Advanced
Mathematics, Cambridge University Press, Cambridge, 2018.
[12] J. B. Olsson, McKay numbers and heights of characters, Math. Scand. 38 (1976), 25–42.
[13] J. B. Olsson, Combinatorics and Representations of Finite Groups, Vorlesungen aus dem
Fachbereich Mathematik der Universität GH Essen 20, 1993.
[14] M. Osima, On the representations of the generalized symmetric group, Math. J. Okayama Univ.
4(1) (1954), 39–56.
[15] B. E. Sagan, The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric
Functions, Graduate Texts in Mathematics, 203, 2nd ed., Springer-Verlag, New York, NY, 2001.
[16] M. Sato, On a game (Notes by K. Ueno in Japanese), in Proceedings of the 12th Symposium of
the Algebra Section of the Mathematical Society of Japan (1968), 123–136.
[17] M. Sato, Mathematical theory of Maya game (Notes by H. Enomoto in Japanese), RIMS
Kôkyûroku 98 (1970), 105–135.
[18] M. Sato, On Maya game (Notes by H. Enomoto in Japanese), Sugaku no Ayumi 15(1) (1970),
73–84.
[19] R. P. Sprague, Über mathematische Kampfspiele, Tohoku Math. J. 41 (1935), 438–444.
[20] J. R. Stembridge, On the eigenvalues of representations of reflection groups and wreath
products, Pacific J. Math. 140(2) (1989), 353–396.
[21] C. P. Welter, The theory of a class of games on a sequence of squares, in terms of the advancing
operation in a special group, Indag. Math. (Proceedings) 57 (1954), 194–200.
Urban Larsson, Rebecca Milley, Richard Nowakowski,
Gabriel Renault, and Carlos Santos
Recursive comparison tests for dicot and
dead-ending games under misère play
Abstract: In partizan games, where players Left and Right may have different options,
there is a partial order defined as preference by Left: G ⩾ H if Left wins G + X whenever she wins H + X for any game position X. In normal play, there is an easy test for
comparison: G ⩾ H if and only if Left wins G − H playing second. In misère play, where
the last player to move loses, the same test does not apply—for one thing, there are
no additive inverses—and very few games are comparable. If we restrict the arbitrary
game X to a subset of games 𝒰 , then we may have G ⩾ H “modulo 𝒰 ”; but without the
easy test from normal play, we must give a general argument about the outcomes of
G + X and H + X for all X ∈ 𝒰 . In this paper, we use the novel theory of absolute com-
binatorial games to develop recursive comparison tests for the well-studied universes
of dicots and dead-ending games. This is the first constructive test for comparison of
dead-ending games under misère play using a new family of end-games called perfect
murders.
1 Introduction
Acknowledgement: Urban Larsson was supported in part by the Aly Kaufman Fellowship. Rebecca Milley was supported in part by the Natural Sciences and Engineering Research Council of Canada. Richard Nowakowski was supported in part by the Natural Sciences and Engineering Research Council of Canada. Gabriel Renault was supported by the ANR-14-CE25-0006 project of the French National Research Agency. Carlos Santos was partially funded by Fundação para a Ciência e a Tecnologia through the project UID/MAT/04721/2013.

Urban Larsson, Ind. Engineering and Management, Technion, Israel Institute of Technology, Haifa, Israel, e-mail: urban031@gmail.com
Rebecca Milley, Computational Mathematics, Grenfell Campus, Memorial University, Corner Brook, Canada, e-mail: rmilley@grenfell.mun.ca
Richard Nowakowski, Gabriel Renault, Dept of Mathematics and Statistics, Dalhousie University, Halifax, Canada, e-mails: rjn@mathstat.dal.ca, gabriel.renault@ens-lyon.org
Carlos Santos, Center for Functional Analysis, Linear Structures and Applications, University of Lisbon, Lisbon, Portugal, e-mail: cmfsantos@fc.ul.pt

https://doi.org/10.1515/9783110755411-016

The purpose of this paper is to develop recursive comparison tests for certain classes of misère-play games. We assume that the reader is familiar with the theory of normal-play combinatorial games, including partizan game outcomes, disjunctive sum and
negation, and equality and inequality (see Section 2 for a brief review).
In combinatorial games, comparability is a critical relation. Domination and re-
versibility both rely on comparisons of options to simplify games. In normal play, it is
straightforward to check that two games G and H are comparable: G ⩾ H if and only if Left wins G − H playing second.
However, in misère play—where the first player unable to move is the winner in-
stead of the loser—this simple test does not apply. Games do not have additive inverses
under general misère play, so H +(−H) ≠ 0. In fact, no nonzero game is equal to zero in
misère play [11]; in normal play, all previous-win positions are zero. There is a modified
hand-tying principle1 for misère play, but nontrivial comparisons are rare in general
misère play [16].
If play is restricted to a specific set or universe of games 𝒰 , then G and H may be
comparable “modulo 𝒰 ”, even if they are not in general. This is restricted misère play
[14, 15]. However, without the easy test from normal play, finding instances of misère
comparison in restricted play requires a proof of a universal statement:
[Domineering diagram: a broken 2 × 3 board (two pieces, summed) claimed ⩾ the unbroken 2 × 3 board]
In normal play, it is easy to see that this inequality is true; we simply show that Left
wins playing second on this sum (where negation is achieved by rotating a board by
90 degrees):
[Domineering diagram: the two pieces plus the negative (90°-rotated) 2 × 3 board]
Intuitively, it seems that the same should be true in misère play (perhaps modulo a suitable restricted universe): Right should be happier to play on the unbroken 2 × 3 game, since he has more freedom to move and Left's options are the same either way.

1 In misère play, if the Left options of H are a nonempty subset of the Left options of G (or if both are empty), and the Right options of G are a nonempty subset of the Right options of H (or both are empty), then trivially G ⩾ H [16].
Without further tools, to prove this in misère play, we would have to prove that, for arbitrary X, if Left wins the unbroken board plus X, then she also can win the two pieces plus X.
In this paper, we present recursive comparison tests for restricted play in two
well-studied universes of games. In the summary of our paper in Section 4, we will
easily prove the domineering comparison above, and will do so in a way that can
be implemented algorithmically. This work is an application of results from absolute game theory [8, 9] to the universes of dicots, 𝒟, and dead-ending games, ℰ . The
new comparison tests establish G ⩾𝒟 H or G ⩾ℰ H based only on outcomes of G
and H and comparisons among their options. The most significant contributions of
this paper are the introduction of strong outcome as a necessary condition for G ⩾ℰ H
and the introduction of perfect murder games as a way to directly calculate strong
outcome.
Section 2 reviews terminology and notation and defines the universes of 𝒟 and ℰ .
Section 3 proves the main results of the paper. Section 3.1 gives necessary conditions
for comparability in 𝒟 and ℰ . Section 3.2 proves that with one additional stipulation,
these conditions are sufficient for comparability in 𝒟. Finally, Section 3.3 proves that
a strengthening of that additional stipulation is necessary and sufficient for compara-
bility in ℰ . A moderate amount of new theory is developed in Section 3.3 to establish
this main result.
2 Definitions
In this section, we review standard definitions from normal-play combinatorial games
in the context of misère play and then define restricted misère play and the universes
of dicots and dead-ending games.
purpose. Under misère play, they are defined as follows, with L > R:

oL (G) = L if Gℒ = ∅, and oL (G) = max { oR (G^L ) : G^L ∈ Gℒ } otherwise;
oR (G) = R if Gℛ = ∅, and oR (G) = min { oL (G^R ) : G^R ∈ Gℛ } otherwise;
that is, oL (G) = L if and only if Left wins G playing first, and so on. The overall outcome
can then be defined by the pair of left-outcome and right-outcome:
o(G) = L if (oL (G), oR (G)) = (L, L),
       N if (oL (G), oR (G)) = (L, R),
       P if (oL (G), oR (G)) = (R, L),
       R if (oL (G), oR (G)) = (R, R).
The total order L > R induces the standard partial order on the outcomes: L >
N > R and L > P > R , whereas N and P are incomparable. In this paper, we
use o(G) to mean the outcome under misère play. If necessary, we may use o− (G) to
distinguish the misère outcome from the normal outcome o+ (G).
In a disjunctive sum of games the current player chooses exactly one of the game
components and plays in it, whereas the other components remain the same:
G + H = {Gℒ + H, G + H ℒ | Gℛ + H, G + H ℛ },
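The misère outcome recursion and the disjunctive sum can be prototyped directly on explicit game trees; a sketch (Python, not from the paper; `Game` and `outcome` are hypothetical names):

```python
class Game:
    """A game {G^L | G^R}, given by its Left and Right options."""
    def __init__(self, left=(), right=()):
        self.left = tuple(left)
        self.right = tuple(right)

    def __add__(self, other):
        # Disjunctive sum: the mover plays in exactly one component.
        return Game(
            [g + other for g in self.left] + [self + h for h in other.left],
            [g + other for g in self.right] + [self + h for h in other.right])

L, R = 1, 0  # encode the total order L > R

def oL(g):
    # Misère convention: if Left (moving first) has no move, Left wins.
    return L if not g.left else max(oR(x) for x in g.left)

def oR(g):
    return R if not g.right else min(oL(x) for x in g.right)

def outcome(g):
    return {(L, L): "L", (L, R): "N", (R, L): "P", (R, R): "R"}[(oL(g), oR(g))]

zero = Game()                 # { | }
star = Game([zero], [zero])   # * = {0 | 0}
one = Game([zero], [])        # 1 = {0 | .}
print(outcome(zero), outcome(star), outcome(star + star), outcome(one))
# N P N R
```

The computed outcomes agree with the misère conventions in the text: o(0) = N and o(∗) = P, whereas under normal play 0 would be a previous-player win.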
Two games G and H are equal if G ⩾ H and H ⩾ G, that is, if o(G + X) = o(H + X)
for all games X. In other words, games are equal if they can be interchanged in any
sum without affecting the outcome. Equality and inequality are dependent upon the
ending condition; we assume misère play in this paper but will use ⩾+ and ⩾− to dis-
tinguish normal from misère, respectively, when needed.
Definition 1. A universe 𝒰 is a nonempty set of games that satisfies the following prop-
erties:
1. option closure: if G ∈ 𝒰 and G′ is an option of G, then G′ ∈ 𝒰 ;
2. disjunctive sum closure: if G, H ∈ 𝒰 , then G + H ∈ 𝒰 ;
3. conjugate closure: if G ∈ 𝒰 , then Ḡ ∈ 𝒰 .
Restricted misère game theory uses weakened definitions of equality and inequal-
ity to study games modulo a universe 𝒰 :
The idea is that we may get equality and comparability, and perhaps reductions
and invertibility, “modulo 𝒰 ”, even if the relations do not hold in general or do not
hold in a larger universe.
For example, the game ∗ = {0 | 0} is invertible modulo 𝒟 [1] but not in ℰ [12]: ∗ + ∗ ≡𝒟 0, but ∗ + ∗ ≢ℰ 0. The games 1 = {0 | ⋅} and 1̄ = {⋅ | 0} are additive inverses modulo ℰ (and thus also modulo 𝒟): 1 + 1̄ ≡ℰ 0, but this is not true in general unrestricted misère play.
The universes of dicots and dead-ending games have proven fruitful for misère
analysis (see [13]), and we continue the development of that theory by introducing
comparison tests for 𝒟 and ℰ .
Recently, [8] introduced absolute combinatorial game theory, a general theory for
combinatorial games under nonspecified ending condition. The theory applies to absolute universes, which are defined by the parental property: for all nonempty sets of
games S, T ⊂ 𝒰 , if G is a game with Gℒ = S and Gℛ = T, then G is also in 𝒰 . Impar-
tial games are not parental; for example, 0 and ∗ are in the impartial universe, but
{0 | ∗} is not. Dicots are parental: if each player has a nonempty set of dicot games
as options, then the game is a dicot. Likewise, dead-ending games are parental. Our
comparison tests for dicots and dead-ending games are specific adaptations of results
from absolute game theory.
3 Recursive comparison
In this section, we develop our main results. In Section 3.1, we review a result from [8]
that gives necessary and sufficient conditions for G ⩾ H in any absolute universe. In
that paper the conditions are named the Proviso and the Maintenance property. The
Maintenance property is confirmed recursively, but in general the Proviso is not. In
Section 3.2, we show how the Proviso reduces to o(G) ⩾ o(H) when the universe is the
set of all dicots 𝒟, and this gives an entirely constructive comparison test for dicots.
The same idea is implicit in [2], where it is stated in terms of the down-linked relation
[16], now generalized by the Maintenance property.
In Section 3.3 we present our main, original results: we show that for the dead-
ending universe, ℰ , the Proviso reduces to a consideration of specific end games which
we call perfect murders. As in 𝒟, the result is a completely recursive comparison test
for G ⩾ℰ H.
Theorem 1 (Proviso and Maintenance [8]). Let 𝒰 be an absolute universe, and let
G, H ∈ 𝒰 . Then G ⩾𝒰 H if and only if the following hold:
Proviso:
1. oL (G + X) ⩾ oL (H + X) for all Left ends X ∈ 𝒰 ;
2. oR (G + X) ⩾ oR (H + X) for all Right ends X ∈ 𝒰 .
Maintenance:
1. ∀H L ∈ H ℒ , ∃GL ∈ Gℒ : GL ⩾𝒰 H L or ∃H LR ∈ H Lℛ : G ⩾𝒰 H LR ; and
2. ∀GR ∈ Gℛ , ∃H R ∈ H ℛ : GR ⩾𝒰 H R or ∃GRL ∈ GRℒ : GRL ⩾𝒰 H.
Recursive comparison tests for dicot and dead-ending games under misère play | 315
The idea behind the Maintenance property is that in an absolute universe when
playing G + H with G ⩾𝒰 H, Left can always “maintain” her position: no matter what
move Right makes in G + H, Left can bring the game back to a position that is just
as good for Left as before Right moved. In some sense, this is a generalization of the fact that in normal play, G ⩾ H ⇒ G + H̅ ⩾ 0 (recall that we do not have this in misère play, because H + H̅ is generally not equal to 0). Note that a Right move in H̅ is a Left move in H, and so the conditions in Theorem 1 are stated without reference to conjugates.
Note that all inequality relations in this section are considered with misère-play
ending condition, unless otherwise specified; if necessary, we use ⩾− for misère play
and ⩾+ for normal play.
Incidentally, Theorem 1 implies the existence of an order-preserving map of
misère-play into normal-play, that is, if G ⩾−𝒰 H, then G ⩾+ H. This result is al-
ready known [2, 8], but we give the argument in Corollary 1 below to illustrate how it
follows from Theorem 1. We can gain some intuition for the idea by considering the
game {n | −n} for a large integer n. Using Chess terminology, this game acts like a
“large zugzwang” under misère play: players do not want to be the first to play in this
position, and so in both G + {n | −n} and H + {n | −n}, players should play G and H
with a “normal-play strategy”, trying to get the last move and force the opponent to
play first on the zugzwang part. Thus if we do not have G ⩾+ H, then we cannot have
G ⩾−𝒰 H.
Proof. Suppose that G ⩾−𝒰 H, so that conditions (1) and (2) of Theorem 1 hold. We need to show that Left wins second on G + H̅ under normal play. Right's options are of the form GR + H̅ or G + H̅L; in each case, the Maintenance property guarantees the existence of a response for Left. Moreover, that response always brings the game to another position of the form X + Y̅ where X ⩾−𝒰 Y. Thus, in all followers of G + H̅, Left will always be able to reply to any Right move, and so Left will win G + H̅ under normal play.
2. ∀H L ∈ H ℒ , ∃GL ∈ Gℒ : GL ⩾𝒟 H L or ∃H LR ∈ H Lℛ : G ⩾𝒟 H LR ;
3. ∀GR ∈ Gℛ , ∃H R ∈ H ℛ : GR ⩾𝒟 H R or ∃GRL ∈ GRℒ : GRL ⩾𝒟 H.
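The dicot test described by these conditions can be run mechanically on small games. The following sketch is ours, not from the paper: it encodes games as pairs of option sets, computes misère outcomes (a player who cannot move wins), and combines the reduced Proviso o(G) ⩾ o(H), read here componentwise on the pair (oL, oR), with the Maintenance conditions above. Under these assumptions it confirms that ∗ + ∗ is equivalent to 0 modulo dicots, as stated in Section 2.

```python
from functools import lru_cache

# A game is a pair (left_options, right_options); options are frozensets of games.
ZERO = (frozenset(), frozenset())
STAR = (frozenset([ZERO]), frozenset([ZERO]))

@lru_cache(maxsize=None)
def add(g, h):
    """Disjunctive sum G + H."""
    left = frozenset({add(gl, h) for gl in g[0]} | {add(g, hl) for hl in h[0]})
    right = frozenset({add(gr, h) for gr in g[1]} | {add(g, hr) for hr in h[1]})
    return (left, right)

@lru_cache(maxsize=None)
def left_wins(g, left_to_move):
    """Misere play: a player who cannot move wins."""
    opts = g[0] if left_to_move else g[1]
    if not opts:
        return left_to_move
    if left_to_move:
        return any(left_wins(o, False) for o in opts)
    return all(left_wins(o, True) for o in opts)

@lru_cache(maxsize=None)
def ge_dicot(g, h):
    """G >= H modulo dicots: reduced Proviso plus the Maintenance property."""
    # Proviso reduced to o(G) >= o(H), checked on each coordinate (L > R).
    if left_wins(h, True) and not left_wins(g, True):
        return False
    if left_wins(h, False) and not left_wins(g, False):
        return False
    # Maintenance, items 2 and 3 above, applied recursively.
    for hl in h[0]:
        if not (any(ge_dicot(gl, hl) for gl in g[0])
                or any(ge_dicot(g, hlr) for hlr in hl[1])):
            return False
    for gr in g[1]:
        if not (any(ge_dicot(gr, hr) for hr in h[1])
                or any(ge_dicot(grl, h) for grl in gr[0])):
            return False
    return True

s2 = add(STAR, STAR)
print(ge_dicot(s2, ZERO) and ge_dicot(ZERO, s2))   # True: * + * is invertible mod dicots
print(ge_dicot(STAR, ZERO) or ge_dicot(ZERO, STAR))  # False: 0 and * are incomparable
```

The componentwise reading of o(G) ⩾ o(H) and the names used here are our choices for the sketch; the exact statement of the dicot test is the paper's, whose first condition is cut off in this excerpt.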
For dead-ending games, the base case will be that X is any left end, not necessarily
0; this brings us back to the Proviso:
1. oL (G + X) ⩾ oL (H + X) for all Left ends X ∈ 𝒰 ;
2. oR (G + X) ⩾ oR (H + X) for all Right ends X ∈ 𝒰 ;
Define the strong left- and right-outcomes by
ôL (G) = min {oL (G + X) : X a Left end in ℰ } and
ôR (G) = max {oR (G + Y) : Y a Right end in ℰ },
respectively.
Mn = 0 if n = 0, and Mn = {⋅ | 0, Mn−1 } if n > 0.
(Figure: game trees of the perfect murders M0, M1, M2, M3, and M4.)
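The perfect murders are small enough to examine by brute force. A throwaway sketch (ours, not from the paper), under the misère convention that a player unable to move wins:

```python
from functools import lru_cache

# A game is a pair (left_options, right_options) of tuples of games.
ZERO = ((), ())

def murder(n):
    """M_0 = 0 and M_n = { . | 0, M_{n-1} } for n > 0: a Left end at every level."""
    g = ZERO
    for _ in range(n):
        g = ((), (ZERO, g))   # no Left options; Right options 0 and the previous M
    return g

@lru_cache(maxsize=None)
def left_wins(g, left_to_move):
    """Misere play: the player who cannot move wins."""
    opts = g[0] if left_to_move else g[1]
    if not opts:
        return left_to_move
    if left_to_move:
        return any(left_wins(o, False) for o in opts)
    return all(left_wins(o, True) for o in opts)

for n in range(5):
    m = murder(n)
    print(n, 'L' if left_wins(m, True) else 'R', 'L' if left_wins(m, False) else 'R')
```

It reports outcome (L, R) = N for M0 = 0 and (L, L), that is, outcome L, for every Mn with n > 0, matching the observation that nonzero Left ends have outcome L.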
Proof. We need to show that o(Mn + X) ⩾ o(Mn+1 + X) for all n > 0 and X ∈ ℰ . Let n > 0
and suppose Right wins Mn + X (first or second or both). We need to show that Right
can win Mn+1 + X.
Right’s winning move in Mn + X must be to 0 + X ′ for a follower X ′ of X (in which
Left moves to a Right end). Say this move to 0 occurs at level k of Mn . Then Right can
win Mn+1 + X by following exactly the same strategy, moving to 0 + X ′ at level k of
Mn+1 .
Proof. Let G ∈ ℰ be a fixed Left-end of rank k > 0. By Lemma 1 it suffices to show that
G ⩾ℰ Mk .
Let X be an arbitrary game in ℰ . We must prove that o(G + X) ⩾ o(Mk + X). If X = 0,
then o(G + X) = L = o(Mk + X), because both G and Mk are nonzero left-ends. Now
assume that rank(X) > 0.
Suppose Left wins Mk + X going first. Then Left’s good first move is to Mk + X L . By
induction, G + X L is at least as good as this move, so Left can also win G + X going first.
Suppose Left wins Mk + X going second; so all Right moves in Mk + X are left- or
next-win. Consider Left playing second in G + X. There are three possibilities:
(1) If there is a Right option GR = 0, then since this is also an option of Mk , Left wins
0 + X.
(2) If Right moves to GR + X, with GR ≠ 0, then by induction GR ⩾ℰ Mk−1 . Since Left
wins from Mk−1 + X, Left also wins from GR + X.
(3) If Right moves to G+X R , then by induction this is at least as good for Left as Mk +X R ,
which Left wins.
come of G + a perfect murder left-end, and also G + 0. Theorem 3 shows that Mk−1 will
yield the minimum outcome of G with a nonzero left end.
We now have a constructive way to compute the strong left-outcome and strong
right-outcome. We can pair the two outcomes to give the strong outcome of the game.
ô(G) = L if (ôL (G), ôR (G)) = (L, L),
       N if (ôL (G), ôR (G)) = (L, R),
       P if (ôL (G), ôR (G)) = (R, L),
       R if (ôL (G), ôR (G)) = (R, R).
With the concept of strong outcome, we now have a recursive comparison test for
dead-ending games.
Example 6. Consider G = {−1 | 1}. Then ôL (G) = L and ôR (G) = R. Therefore ô(G) = N = ô(0). Note that GR = 1 and GRL = 0 ⩾ℰ 0, so Theorem 5 gives G ⩾ℰ 0. Symmetrically, G ⩽ℰ 0. Hence G ≡ℰ 0.
Bibliography
[1] M. R. Allen, Peeking at partizan misère quotients, in R. J. Nowakowski (Ed.) Games of No Chance
4, MSRI Publ. 63 (2015), 1–12.
[2] P. Dorbec, G. Renault, A. N. Siegel, and E. Sopena, Dicots, and a taxonomic ranking for misère
games, J. Combin. Theory Ser. A 130 (2015), 42–63.
[3] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays, Vol. 2,
A. K. Peters Ltd., MA, 2001.
[4] A. Dwyer, R. Milley, and M. Willette, Misère domineering on 2 × n boards, Integers, 21B (2021),
#A10.
[5] A. Dwyer, R. Milley, and M. Willette, Dead-ending day-2 games under misère play, Undergraduate Thesis, Grenfell Campus – Memorial University, 2020.
[6] J. M. Ettinger, Topics in combinatorial games, PhD Thesis, University of Wisconsin–Madison,
1996.
[7] S. Huntemann, The class of strong placement games: complexes, values, and temperature, PhD
Thesis, Dalhousie University, 2018.
[8] U. Larsson, R. J. Nowakowski, and C. P. Santos, Absolute combinatorial game theory, in S. Huntemann and U. Larsson (Eds.) Games of No Chance 6, MSRI Publ. (2022), to appear.
[9] U. Larsson, R. J. Nowakowski, and C. P. Santos, Game comparison through play, Theoret.
Comput. Sci. 725 (2018), 52–63.
[10] U. Larsson, R. J. Nowakowski, and C. P. Santos, manuscript.
[11] G. A. Mesdal and P. Ottaway, Simplification of partizan games in misère play, Integers 7 (2007),
#G6.
[12] R. Milley and G. Renault, Dead ends in misère play: the misère monoid of canonical numbers,
Discrete Math. 313 (2013), 2223–2231.
[13] R. Milley and G. Renault, Restricted developments in partizan misère game theory, in U.
Larsson (Ed.) Games of No Chance 5, MSRI Publ. 70 (2018).
[14] T. E. Plambeck, Taming the wild in impartial combinatorial games, Integers 5 (2005), #G5.
[15] T. E. Plambeck and A. N. Siegel, Misère quotients for impartial games, J. Combin. Theory Ser. A
115 (2008), 593–622.
[16] A. N. Siegel, Misère canonical forms of partizan games, in R. J. Nowakowski (Ed.) Games of No
Chance 4, MSRI Publ. 63 (2015).
Urban Larsson, Richard J. Nowakowski, and Carlos P. Santos
Impartial games with entailing moves
Abstract: Combinatorial game theory has also been called “additive game theory”
whenever the analysis involves sums of independent game components. Such dis-
junctive sums invoke comparison between games, which allows abstract values to be
assigned to them. However, there are rulesets with entailing moves that break the al-
ternating play axiom and/or restrict the other player’s options within the disjunctive
sum components. These situations are exemplified in the literature by a ruleset such
as nimstring, a normal play variation of the classical children’s game dots & boxes,
and top entails, an elegant ruleset introduced in the classical work Winning Ways by
Berlekamp, Conway, and Guy. Such rulesets fall outside the scope of the established
normal play theory. Here we axiomatize normal play via two new terminating games,
∞ (Left wins) and ∞̄ (Right wins), and achieve a more general theory. We define affine
impartial, which extends classical impartial games, and we analyze their algebra by
extending the established Sprague–Grundy theory with an accompanying minimum
excluded rule. Solutions of nimstring and top entails are given to illustrate the the-
ory.
1 Introduction
Combinatorial game theory (CGT), as described in [1, 3, 4, 7], considers disjunctive
sums of normal play games. To evaluate the outcome of a sum of such games, it suffices
to analyze the components individually and then add the individual values.
However, some classical impartial rulesets, such as nimstring and top entails,
fall slightly outside the usual CGT axioms. In nimstring, certain moves require a
player to play again, or carry-on, which is a violation of the alternating play axiom.
In top entails, certain moves enforce the next player to play in the same component,
which violates the standard definition of a disjunctive sum. Thus the values of individ-
ual components are no longer a relevant measure, given the standard CGT axioms. The type of moves mentioned in this paragraph will be gathered under the term entailing moves.
Acknowledgement: Carlos P. Santos was partially supported by FCT – Fundação para a Ciência e Tecnologia, under the project UIDB/04721/2020.
https://doi.org/10.1515/9783110755411-017
The purpose of this paper is to extend impartial normal play games sufficiently to
include games with entailing moves. While accomplishing this, we expand the classi-
cal Sprague–Grundy theory to fit this extension.
We will rebuild the normal play axioms by using so-called terminating games, or infinities, ∞ and ∞̄. Here we focus on the impartial setting; the general comprehensive theory for partizan games will appear in [5]. These theories are called affine impartial and affine normal play, respectively.
Although we consider only impartial games in this paper, we will keep the players distinguished as Left and Right. In particular, Left wins if either player plays to ∞ in any component, and Right wins in case of play to ∞̄. Note that the normal play zero is restored by defining 0 = {∞̄ | ∞}, a first-player losing position.
It is well known that in classical combinatorial game theory the impartial values
are nimbers. We will prove that there is exactly one more value modulo affine impar-
tial, a game K, called the moon. This value was anticipated in the classical work Win-
ning Ways by Berlekamp, Conway, and Guy. In [3, vol. 2, p. 398], we can read “A loony
move is one that loses for a player, no matter what other components are.” Before de-
veloping the theory, we illustrate how the infinities are used in the motivating rulesets,
nimstring [2, 3] and top entails [3, vol. 2].
Let us first briefly mention the organization of the paper. To facilitate the devel-
opment of the new impartial theory, Section 2 considers the basic properties of unre-
stricted affine normal play, aiming for a game comparison result, Theorem 7. The affine
impartial theory is developed in Section 3. The main result is Theorem 17, which shows
that values in this extension are the nimbers plus one more value. Theorem 20 gives
an algorithm to find the value of a given position, and notably, if there are no infini-
ties in the options, then the nimbers are obtained by the usual mex-rule. We finish off
with two case studies. In Section 4, we compute the value of an interesting nimstring
position, anticipated in Section 1.1. In Section 5, we compute the values for top entails heaps of sizes 1 through 12, and Theorem 21 provides theoretical justification for computing top entails values.
Figure 17.1 shows an example position, where no square can be completed in the
next move. Later, through the new theory, we will see that the position H equals ∗2
modulo affine impartial.
In Figure 17.2, we show a position G with two options, one of which is an entailing
move. Namely, if the top bar is drawn, then the next player continues, but if the middle
bar is drawn, then the current player has to carry-on.
Figure 17.2: A nimstring position G with its two options, a “double-box” and an entailing carry-on
position.
When we develop the theory, we will see that the position G, to the left in Figure 17.2,
is the abstract game
{{∞ | 0}, 0 | {0 | ∞̄}, 0}.   (17.1)
The option 0 is obtained by drawing the top bar. The intuition for this is as follows:
if a player can win X, then he can also win “double box”+X because the player who
moves inside the “double box” has to play again. Due to that, “double box” is neutral
in all disjunctive sums, including the case X = { | }.
If a player draws the middle bar in G, then they have to carry-on, and this is rep-
resented by the abstract option {∞ | 0} if Left moved. There is an infinite urgency in
this game: Right has to play here or lose. So the effect is as desired: Left plays again,
and alternating play is restored. Hence disjunctive sum play is also restored within
the affine impartial convention. Moreover, the Right option in this threat should be 0,
because Left loses by playing this option if G is played alone. If the sum is G + H with
H as in Figure 17.1, then the next player wins by playing this entailing middle bar in G.
The option in Figure 17.4 splits G into two piles, and the next player's options are unrestricted. By the terminating effect of playing in a heap of size one, this composite game should be equal to the game H = {∞ | ∞̄}.
The option in Figure 17.5 is an entailing move, and the next player must continue in
this component, even if other moves are available. Therefore the game form of the
entailing option in Figure 17.5 is
{∞ | 1 + 1, 1entail }
if Left just moved, where 1 denotes a heap of size one. The terminating threat forces
Right to play here, instead of possibly using other options.
Intuitively, either way of responding reduces to a game of the form H = {∞ | ∞̄}.
In conclusion, the heap of size three should be equal to the game H, and disjunctive
sum play has been restored. All this intuition will be rigorously justified in the coming
complete theory for affine impartial play.
It turns out that affine impartial games require only a small extension to the Sprague–Grundy theory. Namely, the game in (17.1), obtained from the nimstring position in Figure 17.2, equals the game H = {∞ | ∞̄} in the previous paragraph, modulo affine impartial, and later we will assign the value K to the equivalence class of such games.
Addition of games is defined as usual, apart from items 3 and 4. The fifth item is natural in terms of perfect play, since if ∞ appears, then ∞̄ cannot appear and vice versa.
The definitions of equality and partial order of games are based on the outcome
diamond.
Note that the exclusion of the infinities does not diminish the generality of the definition, but is necessary due to Axiom 1.5. As usual, we have the following observations. If G = H, then replacing H by G, or G by H, does not hurt the players under any circumstances. Similarly, if G ≽ H, then replacing H by G does not hurt Left, and replacing G by H does not hurt Right.
Of course, all checks are asymmetric, apart from the "trivial check" {∞ | ∞̄}. A player would not use this check, because the opponent "checkmates" by defending.
In general, when we say form, we mean the literal form, and when we say game,
we usually mean (any member in) the full equivalence class of games. When we write
G ∈ Np ∞ , we usually refer to the literal form, but the context may decide.
Some classical theorems are still available in Np ∞ .
G̅ = ∞̄ if G = ∞,
     ∞ if G = ∞̄,
     {G̅ℛ | G̅ℒ } otherwise,
where G̅ℒ denotes the set of literal forms G̅L for GL ∈ Gℒ , and similarly for G̅ℛ .
holds. Otherwise, let us verify that G + G̅ is a P -position. If Left, playing first, chooses GL + G̅, because this game is not ∞ (G is not a check), then Right can answer with G̅L, and that option GL + G̅L is equal to zero. Since by Corollary 1 that option is a P -position, Right wins. Analogous arguments work for the other options of the first player, and so G + G̅ is a P -position.
Suppose now that G is not a Conway form. Because it is a Conway game, by definition it is equal to some G′ ∈ Np C . The first paragraph proved that G′ + G̅′ = 0. Also, by symmetry, G̅ is equal to G̅′. Therefore G′ + G̅′ = 0 implies G + G̅ = 0.
– G = H if and only if G + H̅ ∈ P .
In a follow-up paper [5], where we study the full game space Np ∞ , we provide
a solution of the general case of G ≽ H.
Of course, a nonquiet game either has no option or is a check, and so (unless a trivial check) it is by definition asymmetric. However, this is the only exception to symmetry
in the world of affine impartial games. It is easy to check that Im∞ satisfies the stan-
dard closure properties of combinatorial games, that is, the closure of taking options,
addition, and conjugates.
The following result must hold for any class of games that claims to be “impartial”.
3 In terms of ruleset: here “affine impartial” is an abbreviation of affine normal play impartial in the
sense that if the player to move cannot complete their move, then they lose.
Proof. This proof uses a strategy-stealing argument. Suppose that G ∈ L . Then Left wins G playing first with some option GL . Hence by symmetry Right wins G playing first with the conjugate option G̅L, which contradicts G ∈ L . A similar argument holds against G ∈ R .
We want to restrict our analysis to Im∞ . Therefore we define equality modulo Im∞ .
Notation 11. Let nim ⊆ Im∞ denote the subset of affine impartial games that equal
nimbers.
It is well known that in classical combinatorial game theory the impartial values
are nimbers. We will prove that there is exactly one more value modulo Im∞ , a game
K, called moon. In [3, vol. 2, p. 398], we can read “A loony move is one that loses for
a player, no matter what other components are.” The following general definition is
motivated by that idea.
Theorem 13 (Loony uniqueness). All loony games are equal modulo Im∞ .
Proof. Consider two loony games G and G′ . We know that all quiet X ∈ Im∞ belong to
N ∪ P . By the definition of a loony game we have G + X ∈ N and G′ + X ∈ N . On the
other hand, if X ∈ Im∞ is not quiet, then ∞ ∈ X ℒ , and since X is impartial, ∞̄ ∈ X ℛ .
Hence G + X ∈ N and G′ + X ∈ N . In all cases, o(G + X) = o(G′ + X) = N , and the
theorem is proved.
Observation 14. Two loony games may be different modulo Np ∞ but equal modulo Im∞ . The games {{∞ | 0}, 0 | {0 | ∞̄}, 0} and {{∞ | 2}, 0 | {−2 | ∞̄}, 0} are loony. These games are different modulo Np ∞ . Left, playing first, loses {{∞ | 0}, 0 | {0 | ∞̄}, 0} − 1 and wins {{∞ | 2}, 0 | {−2 | ∞̄}, 0} − 1. However, as will follow from the theory developed here, we cannot distinguish these two games modulo Im∞ .
To prove an affine impartial minimum excluded rule, we separate the options into
two classes.
Definition 10 (Immediate nimbers). Let G ∈ Im∞ . The set of G-immediate nimbers, de-
noted SG , is the set SG = Gℒ ∩ nim .
Definition 11 (Protected nimbers). Consider a game form G ∈ Im∞ . The set of G-protected nimbers PG is
1. PG = nim , if ∞ ∈ Gℒ ;
2. PG = {∗n : G⃗L + ∗n ∈ L , G⃗L ∈ Gℒ } otherwise.
The second item says: if ∞ ∉ Gℒ , then ∗n ∈ PG if there is a check G⃗L = {∞ | G⃗Lℛ } ∈ Gℒ such that Right, playing first, loses G⃗L + ∗n, that is, playing first, Left is protected against those nimbers in a disjunctive sum.
Similarly to Definition 10, to obtain the same set, we could have defined PG with
respect to Right options.
Note that PK = nim . This statement holds for the literal form K = ±∞. However,
we can show that by using instead the form K = {{∞ | 0}, 0 | {0 | ∞̄}, 0} as in (17.1),
then PK = nim \ {0}. The output of “protected” is sensitive to which form we choose.
When the underlying game form is understood, we simply refer to the immediate
and protected nimbers, respectively.
Example 15. Let G ∈ Im∞ be such that the Left options are 0, ∗2, and {∞ | {∗ | ∞̄}, 0}.
Of course, SG = {0, ∗2}. On the other hand, playing first, Left can use the check to win
G + ∗. Because of that, PG = {∗}. An important observation is that although Left is
protected against the nimber ∗, Left cannot force a Left move to ∗ in G, but if Right
moves to 0, then Left wins G + ∗ anyway.
Sometimes, Right can manoeuvre Left’s eventual play to a nimber, or worse, via a
sequence of “check upon check”.
Example 16. The form G = {∗2, {∞ | {0, ∗4 | ∞̄}, 0} | ∗2, {0, {∞ | 0, ∗4} | ∞̄}} is maneu-
verable. If Left avoids the immediate nimber ∗2 by checking, then Right can still force
Left to move to one of the nimbers 0 or ∗4.
Proof. After a Left first move in G, if needed, Right can force with checks a Left move
to a nimber or a move by either player to ∞. Let C be the set of nimbers that can arise
through this forcing strategy by Right. Then C is finite, because we study short games.
Let ∗n be a nimber such that for all ∗m ∈ C, we have n > m. In G + ∗n, after a first
check, say, to GL + ∗n, Right forces with checks a move by either player to ∞ or a Left
move to ∗m + ∗n (n > m). In the second case, after the sequence, Right wins with a
TweedleDee-TweedleDum move. Thus Left can protect against at most a finite number
of nimbers. That explains why PG is finite in case of maneuverable games.
Let mex(X) denote the smallest nonnegative integer not in X. Let 𝒢 denote the set
of Sprague–Grundy values of a set of nimbers, that is, if S = {∗ni }, then 𝒢 (S) = {ni }.
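For concreteness, here is the classical minimum-excluded recursion that Theorem 20 extends, applied to the subtraction game S(1, 2, 3); this game is a standard illustration, not a ruleset from this paper.

```python
def mex(values):
    """Smallest nonnegative integer not in the given set."""
    n = 0
    while n in values:
        n += 1
    return n

def grundy_subtraction(limit, moves=(1, 2, 3)):
    """Grundy values g(0..limit) of the subtraction game S(1,2,3):
    g(k) = mex of the Grundy values of the positions reachable from k."""
    g = [0] * (limit + 1)
    for k in range(1, limit + 1):
        g[k] = mex({g[k - a] for a in moves if a <= k})
    return g

print(grundy_subtraction(8))  # [0, 1, 2, 3, 0, 1, 2, 3, 0]: the well-known period-4 pattern
```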
Proof. By Lemma 2 we know that SG ∪ PG is finite. Let n = mex(𝒢 (SG ∪ PG )). Let us
argue that the game G + ∗n ∈ P . If the first player moves in G to a nimber ∗m ∈ SG ,
then because n is excluded from 𝒢 (SG ), he loses. If the first player moves in G to a quiet nonnimber G′ , then because G′ is not a nimber, G′ + ∗n ∈ N (Theorem 10), and
the first player also loses. If the first player moves in G, giving a check, then because
n is excluded from 𝒢 (PG ), he also loses. Finally, if the first player moves to G + ∗n′
(n′ < n), then because n is the minimum excluded from 𝒢 (SG ∪ PG ), he loses because
the opponent has a direct TweedleDee-TweedleDum move or wins with a check. Hence
G = ∗n by Theorem 10.
Proof. Consider G, H ∈ Im∞ \ nim . For a contradiction, assume that the sum of the
birthdays b = b(G) + b(H) is the smallest possible such that G + H ∈ P . Note that b > 0
by the assumptions on G and H. Without loss of generality, we will analyze the move
from G + H to G + H L . First, we prove two claims that concern local play in G and H,
respectively.
Claim 1. Playing second in G, Left can avoid Left moves to nimbers and moves by ei-
ther player to ∞ until the first Right-quiet move.
Proof of Claim 1. Suppose that Right, playing first in G, could force a Left move to a
nimber or a move by either player to ∞. If so, then in G + H, by giving checks in G,
Right could force some GRL⋅⋅⋅RL + H = ∗n + H (Right's turn) or a move by either player
to ∞ + H. Of course, the second situation would be a victory for Right. Regarding the
first case, at that moment the position would be ∗n + H. Because H is not a nimber, by
Theorem 10 we would have ∗n + H ∈ N , which is a winning move for Right. In either
case, Right, as first player, would win. That would contradict G + H ∈ P .
Claim 2. There is H L such that Left can avoid Left moves to nimbers and moves by
either player to ∞ until the first Right-quiet move.
Theorem 17 (Affine impartial values). Every affine impartial form equals a nimber or
the game K (mod Im∞ ).
Observation 18. A form can be loony modulo Im∞ and not loony modulo Np ∞ . An
example is the form G = {∗, {∞ | ∗} | ∗, {∗ | ∞̄}}. This game is not loony modulo Np ∞ .
Theorem 19. The game K is absorbing modulo Im∞ , that is, K + Y =Im∞ K for all
Y ∈ Im∞ .
Proof. Since K = ±∞, regardless of what X ∈ Im∞ is, the first player wins both K +Y +X
and K + X. Therefore by the definition of equality of games, K + Y =Im∞ K.
Theorem 20 (Affine impartial minimum excluded rule). Let G ∈ Im∞ . We have the fol-
lowing possibilities:
– If SG ∪ PG = nim , then G = K and mex(𝒢 (SG ∪ PG )) = ∞;
– If SG ∪ PG ≠ nim , then G = ∗mex(𝒢 (SG ∪ PG )).
Corollary 3. If all the options of a game G ∈ Im∞ are quiet, then G is a nimber.
Proof. If all the options of a game G ∈ Im∞ are quiet, then PG = ⌀. Therefore SG ∪ PG = SG ≠ nim , and G = ∗mex(𝒢 (SG )) by Theorem 20.
All of (a), (b), (c), (d), and (e) are 𝒫 -positions. The game value of (f) is K = {{∞ | 0}, 0 | {0 | ∞̄}, 0}.
In position (l) the central horizontal move is option (d), which is equal to 0. The
other options are (f) and (g), which are equal to K. Therefore the literal form is
l = {0, K, K | 0, K, K}
with Sl = {0} and Pl = ⌀. Applying the affine impartial minimum excluded rule, we
conclude that the position is ∗.
Now we are ready for (n), a more complex situation. The literal form is
n = {h, i, k, {∞ | l} | h, i, k, {l | ∞̄}},
that is,
n = {K, K, K, {∞ | ∗} | K, K, K, {∗ | ∞̄}}.
Hence Sn = ⌀, and Pn = nim \ {∗}. Applying the affine impartial minimum excluded
rule, we conclude that the position is ∗.
The form is {n, m, c | n, m, c}, that is, {∗, ∗, 0 | ∗, ∗, 0} = ∗2. Here n represents a play of
the top or bottom bar, m represents a play of some middle bar, and c represents play
of the left line.
Consider a heap of size n. We claim that an entailing move by Left does not protect
her against an element in Sn−1 . To see this, let ∗m ∈ Sn−1 . Moving in n + ∗m, if Left
chooses {∞ | (n − 1)ℛ } + ∗m, then Right answers ∗m + ∗m and wins. On the other
hand, we observe that an entailing move by Left does not protect her against the el-
ements of Pn−1 . To see this, let ∗m be an element of Pn−1 . Moving in n + ∗m, if Left
chooses {∞ | (n − 1)ℛ } + ∗m, then because in n − 1, Right is protected against ∗m, he has an entailing winning move in the first component. Therefore we have the general recursion Pn = nim \ (Sn−1 ∪ Pn−1 ).
The set Sn is composed of the values of the positions of the form ℓ + m, with ℓ + m = n and ℓ, m > 0, disregarding any sum in which the value K appears. Hence the recurrence for top entails is as follows.
Theorem 21. The sets P0 = S0 = ⌀, and, for all n > 0, Pn = nim \ (Sn−1 ∪ Pn−1 ) and Sn = {𝒢 (ℓ + m) : ℓ + m = n, ℓ, m > 0, ℓ, m ≠ K}.
Proof. This is explained in the above paragraph.
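The recursion of Theorem 21 is directly executable once the cofinite sets are represented by their finite complements. The following sketch is ours (with `None` standing for the loony value K, and nimber sums computed by XOR of Grundy values); it reproduces the heap values tabulated below.

```python
# Sets of Grundy values, finite or cofinite: (is_finite, frozenset of values).
FIN, COF = True, False
EMPTY = (FIN, frozenset())

def union(a, b):
    (fa, sa), (fb, sb) = a, b
    if fa and fb:
        return (FIN, sa | sb)
    if fa:
        return (COF, sb - sa)   # finite union cofinite: complement shrinks
    if fb:
        return (COF, sa - sb)
    return (COF, sa & sb)

def nim_minus(a):
    """nim \\ a: complement within the set of all Grundy values."""
    fa, sa = a
    return (COF, sa) if fa else (FIN, sa)

def mexv(a):
    """mex of a set; None encodes an infinite mex (the loony value K)."""
    fa, sa = a
    if not fa:                      # cofinite: least value of the complement
        return min(sa) if sa else None
    n = 0
    while n in sa:
        n += 1
    return n

def top_entails(limit):
    """Heap values of top entails via the recursion of Theorem 21."""
    g, S, P = [0], [EMPTY], [EMPTY]
    for n in range(1, limit + 1):
        # S_n: nimber values of the splitting moves l + (n - l), K discarded.
        Sn = (FIN, frozenset(g[l] ^ g[n - l] for l in range(1, n)
                             if g[l] is not None and g[n - l] is not None))
        # P_n = nim \ (S_{n-1} u P_{n-1}).
        Pn = nim_minus(union(S[n - 1], P[n - 1]))
        S.append(Sn)
        P.append(Pn)
        g.append(mexv(union(Sn, Pn)))
    return g

print(top_entails(12))  # [0, None, 0, None, 1, 0, 2, 1, 3, 0, 1, 3, 4]
```

The printed sequence matches the table: heaps 1 and 3 are loony, and heaps 0, 2, 4, ..., 12 take the values 0, 0, 1, 0, 2, 1, 3, 0, 1, 3, 4.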
n    Sn           Pn                   Sn ∪ Pn              mex(𝒢 (Sn ∪ Pn ))
0    ⌀            ⌀                    ⌀                    0
1    ⌀            nim                  nim                  ∞
2    ⌀            ⌀                    ⌀                    0
3    ⌀            nim                  nim                  ∞
4    {0}          ⌀                    {0}                  1
5    ⌀            nim\{0}              nim\{0}              0
6    {∗}          {0}                  {0, ∗}               2
7    {0}          nim\{0, ∗}           nim\{∗}              1
8    {0, ∗2}      {∗}                  {0, ∗, ∗2}           3
9    {∗}          nim\{0, ∗, ∗2}       nim\{0, ∗2}          0
10   {0, ∗3}      {0, ∗2}              {0, ∗2, ∗3}          1
11   {0, ∗2}      nim\{0, ∗2, ∗3}      nim\{∗3}             3
12   {0, ∗, ∗2}   {∗3}                 {0, ∗, ∗2, ∗3}       4
With the recursion, we know that the heap n has value K if and only if Sn−1 ∪ Pn−1 ⊆ Sn . That happens for n = 2403, n = 2505, and n = 33243, as mentioned in [8]. One of three possibilities must happen: a) only finitely many heaps have nimber values, b) only finitely many heaps have the loony value, or c) infinitely many heaps have nimber values and infinitely many have the loony value. However, it is an open problem which case occurs.
At the first Combinatorial Games Workshop at MSRI, John Conway proposed that
an effort should be made to devise some ruleset with entailing moves that is nontrivial
but (unlike top entails) susceptible to a complete analysis. As a sequel to this work,
we are finalizing a paper [6] with a proposal of a ruleset to meet Conway’s suggestion.
Bibliography
[1] M. Albert, R. J. Nowakowski, D. Wolfe, Lessons in Play: An Introduction to Combinatorial Game
Theory, A. K. Peters, 2007.
[2] E. R. Berlekamp, The Dots and Boxes Game: Sophisticated Child's Play, A. K. Peters Ltd., 2000.
[3] E. R. Berlekamp, J. H. Conway, R. K. Guy, Winning Ways, Academic Press, London, 1982.
[4] J. H. Conway, On Numbers and Games, Academic Press, 1976.
[5] U. Larsson, R. J. Nowakowski, C. P. Santos, Combinatorial games with checks and terminating
moves, preprint.
[6] U. Larsson, R. J. Nowakowski, C. P. Santos, Electric cables, preprint.
[7] A. N. Siegel, Combinatorial Game Theory, American Math. Soc., 2013.
[8] J. West, New values for top entails, in Games of No Chance, MSRI Publications, 29, 345–350,
1996.
James B. Martin
Extended Sprague–Grundy theory for locally
finite games, and applications to random
game-trees
Abstract: The Sprague–Grundy theory for finite games without cycles was extended
to general finite games by Cedric Smith and by Aviezri Fraenkel and coauthors. We
observe that the same framework used to classify finite games also covers the case of
locally finite games (that is, games where any position has only finitely many options).
In particular, any locally finite game is equivalent to some finite game. We then study
cases where the directed graph of a game is chosen randomly and is given by the tree
of a Galton–Watson branching process. Natural families of offspring distributions dis-
play a surprisingly wide range of behavior. The setting shows a nice interplay between
ideas from combinatorial game theory and ideas from probability.
1 Introduction
Among the plethora of beautiful and intriguing examples to be found in Elwyn
Berlekamp, John Conway, and Richard Guy’s Winning Ways, there is the game of
Fair Shares and Varied Pairs [1, Ch. 12]. The game is played with some number of
almonds, which are arranged into heaps. A move of the game consists of either
– dividing any heap into two or more equal-sized heaps (hence “fair shares”), or
– uniting any two heaps of different sizes (hence “varied pairs”).
The only position from which no move is possible is the one where all the almonds are
completely separated into heaps of size 1. When that position is reached, the player
who has just moved is the winner.
Fair Shares and Varied Pairs is a loopy game: the directed graph of game po-
sitions has cycles, so the game can return to a previously visited position. The way in
which the loopiness manifests itself depends on the number of almonds:
– With three or fewer almonds, there are no cycles. The game is nonloopy.
Acknowledgement: Many thanks to Alexander Holroyd and Omer Angel for conversations relating to
this work, particularly during the Montreal summer workshop on probability and mathematical physics
at the CRM in July 2018. I am grateful to an anonymous referee, whose comments have considerably
improved the paper.
James B. Martin, Department of Statistics, University of Oxford, Oxford, United Kingdom, e-mail:
martin@stats.ox.ac.uk
https://doi.org/10.1515/9783110755411-018
– With four to nine almonds, the graph has loops, but all positions are equivalent
to finite nim heaps. Hence in any position, either the first player has a winning
strategy, or the second player has a winning strategy; furthermore, the same is
true for the (disjunctive) sum of any two positions or for the sum of a position
with a nim heap. Berlekamp, Conway, and Guy call such behavior latently loopy.
“This kind of loopiness is really illusory; unless the winner wants to take you on
a trip, you won’t notice it.”
– With 10 almonds, still any position has either a forced win for the first player or
a forced win for the second player. However, now there exist some patently loopy
positions that are not equivalent to finite nim heaps. If we take the sum of two
such positions, or the sum of such a position with a nim heap, we can obtain a
game where neither player has a winning strategy – the game is drawn with best
play.
– With 11 or more almonds, there exist blatantly loopy positions where the game is
drawn with best play.
Figure 18.1: An example of a finite directed graph that could arise from the Galton–Watson tree
model considered in the introduction with out-degrees 0 and 4.
The game played with such a tree as its game-graph displays very interesting parallels
with that of Fair Shares and Varied Pairs described above. The behavior depends
on the value of the parameter p. We will find that there are thresholds a0 = 1/4, a1 ≈
0.52198, and a2 = 53/4 /4 ≈ 0.83593 such that the following hold.
– For p ≤ a0 , the tree is finite with probability 1.
– For a0 < p ≤ a1 , there is positive probability that the tree is infinite. However,
with probability 1, all its positions are equivalent to finite nim heaps, and so in
particular every position has a winning strategy for one or the other of the players.
– For a1 < p ≤ a2 , still with probability 1, every position has a winning strategy for
one player or the other. However, there is now positive probability that the tree has
positions that are not equivalent to finite nim heaps. The sum of two such games,
or the sum of such a game with a nim heap, may be drawn with best play.
– For p > a2 , with positive probability, the tree has positions that are drawn with
best play.
first-player wins (if 0 ∈ 𝒜) or draws (if 0 ∉ 𝒜). Again, we have equivalence between
two games if and only if they have the same (extended) Sprague–Grundy value.
Smith [13] already envisages extensions of the theory to infinite games, involving
ordinal-valued Sprague–Grundy functions. An extension of a different sort to infinite
graphs was done by Fraenkel and Rahat [3], who extend the finite nonloopy Sprague–
Grundy theory to infinite games that are locally path-bounded in the sense that for any
vertex of the game-graph, the set of paths starting at that vertex has finite maximum
length.
In this paper, we observe that the extended Sprague–Grundy values, which clas-
sify finite games, are also enough to classify the class of locally finite games in which
every position has finitely many options. As a result, any such locally finite (perhaps
cyclic) game is equivalent to a finite (perhaps cyclic) game.
We then focus in particular on applying the theory to games whose directed graph
is given by a Galton–Watson tree, of which the 0-or-4 tree described in the previous
section is an example. Galton–Watson trees provide an extremely natural model of
a random game-tree. They have a self-similarity, which can be described as follows:
the root individual has a random number of children (distributed according to the off-
spring distribution), and then conditional on that number of children, the sub-trees of
descendants of each of those children are independent and have the same distribution
as the original tree.
Games on Galton–Watson trees (including normal play, misère play, and other
variants) are studied by Alexander Holroyd and the current author in [9]. There a par-
ticular focus was on determining which offspring distributions give positive probabil-
ity of a draw and on describing the type of phase transition that occurs between the
sets of distributions with and without draws. In this paper, we concentrate on normal
play; but, armed with the extended Sprague–Grundy theory, we can investigate, for ex-
ample, whether infinite-rank positions occur in games without draws (the case anal-
ogous to Berlekamp, Conway, and Guy’s “patently loopy” behavior described above).
This setting shows a very nice interplay between ideas from combinatorial game the-
ory and ideas from probability.
One tool on which we rely heavily is the study of the behavior of the game-graph
when the set 𝒫 of its second-player-winning positions is removed. This reduction be-
haves especially nicely in the Galton–Watson setting. For example, if we take a Galton–
Watson tree for which draws have probability 0, condition the root to be a first-player
win, and remove the set 𝒫 , then the remaining component connected to the root is
again a Galton–Watson tree with a new offspring distribution. Combining iterations
of this procedure with recursions involving the probability generating function of the
offspring distribution yields a lot of information about the infinite-rank positions that
can occur in the tree.
We finish by presenting three particular examples of families of offspring distri-
bution: the Poisson case, the geometric case, and the 0-or-4 case described above. In
these examples alone, we see a surprisingly wide variety of different types of behavior.
Extended Sprague–Grundy theory for locally finite games | 347
We now briefly describe the organization of the paper. In Section 2, we describe the
extended Sprague–Grundy theory for locally finite games. Although the setting is new,
the results can be written in a form that is almost identical to that of the finite case.
We proceed in a way that closely parallels the presentation of Siegel from Section IV.4
of [12] (with some variations of notation). The proofs given in [12] also carry over to the
current setting essentially unchanged, and for that reason, we do not reproduce them
here. A reader who is not already familiar with the extended Sprague–Grundy theory
for finite games may like to start with that section of [12] before reading on further
here.
In Section 3, we discuss the operation of removing 𝒫 -positions from a locally finite
game, and examine its effect on the Sprague–Grundy values of the positions which
remain. For the particular case of trees, we give an interpretation involving mex labelings
(labelings of the vertices of the tree by natural numbers that obey a mex recursion at
each vertex).
In Section 4, we introduce games on Galton–Watson trees and develop the analy-
sis via graph reductions and generating function recursions.
Finally, examples of particular offspring distributions are studied in Section 5.
Informally, we consider two-player games with alternating turns; each turn con-
sists of moving from a position x to a position y, where y ∈ Γ(x). We consider normal
play: if we reach a terminal position, meaning a vertex with out-degree 0, then the next
player to move loses. Since the graphs we consider may have cycles or infinite paths,
it may be that play continues forever without either player winning.
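On a finite (possibly cyclic) game-graph, the three outcomes — a previous-player win 𝒫, a next-player win 𝒩, and a draw 𝒟 — can be computed by the standard least-fixed-point iteration: label a vertex 𝒩 once some option is known to be 𝒫, label it 𝒫 once all options are known to be 𝒩 (vacuously for terminal vertices), and call everything never labelled a draw. The dict-based interface below is an illustrative sketch, not the paper's notation.

```python
def outcomes(G):
    """G: dict mapping each vertex to its list of options.
    Returns dict vertex -> 'P', 'N' or 'D' under normal play."""
    label = {}
    changed = True
    while changed:
        changed = False
        for x in G:
            if x in label:
                continue
            if any(label.get(y) == 'P' for y in G[x]):
                label[x] = 'N'   # some move reaches a P-position
                changed = True
            elif all(label.get(y) == 'N' for y in G[x]):
                label[x] = 'P'   # every move (vacuously, if terminal) loses
                changed = True
    # vertices never labelled are drawn with best play
    return {x: label.get(x, 'D') for x in G}
```

For instance, a pure two-cycle yields two drawn positions, while a path to a terminal vertex is classified 𝒫/𝒩 as usual.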
Formally, a locally finite game is a pair G = (V, x) where V is a locally finite di-
rected graph (which is allowed to contain cycles) and x is a vertex of V. We will often
write just x instead of (V, x) when the graph V is understood. For example, for the out-
come function 𝒪, the Sprague–Grundy function 𝒢 , and the rank function (all defined
below), we will often write 𝒪(x), 𝒢 (x), and rank(x), rather than 𝒪((V, x)), 𝒢 ((V, x)),
and rank((V, x)). We use the fuller notation when we need to consider more than one
graph simultaneously (e. g., when considering disjunctive sums of games, or when
considering operations which reduce a graph by removing some of its vertices).
Let V be a directed graph, and let o be a vertex of V. If o has in-degree 0, and if
for every x ∈ V, there exists a unique directed walk from o to x, then we say that V is a
tree with root o. If x and y are vertices of a tree V with y ∈ ΓV (x), then we may say that
y is a child of x in V. We write height(x) for the height of x, which is the number of arcs
in the path from o to x.
either u = x and y ∈ ΓW (v), or x ∈ ΓV (u) and v = y. If V and W are both locally finite,
then so is V × W.
If G = (V, x) and H = (W, y) are locally finite games, we define their (disjunctive)
sum G + H as the locally finite game (V × W, (x, y)).
We have the following interpretation. A position of V × W is an ordered pair of a
position of V and a position of W. To make a move in the sum of games from position
(x, y) of V × W, we must either move from x to one of its options in V, or from y to one
of its options in W (and not both). A position (x, y) is terminal for V × W if and only if
x is terminal for V and y is terminal for W.
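For finite graphs, the product construction just described is a few lines; the function name `sum_graph` is ours.

```python
def sum_graph(V, W):
    """Disjunctive sum: a position is a pair, and a move changes exactly
    one coordinate to one of its options in that component's graph."""
    return {(u, v): [(u2, v) for u2 in V[u]] + [(u, v2) for v2 in W[v]]
            for u in V for v in W}
```

As in the interpretation above, (x, y) is terminal in the product if and only if x is terminal in V and y is terminal in W.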
Now we define equivalence between two locally finite games G and H. The
games G and H are said to be equivalent, denoted by G = H, if 𝒪(G + X) = 𝒪(H + X)
for every locally finite game X.
Note here that we have defined equivalence within the class of locally finite games:
we required the equality to hold for every locally finite game X. The definition (and the
meaning of the results below) would be different if X ranged over a different set. How-
ever, it will follow from the extended Sprague–Grundy theory below that this equiva-
lence extends both the equivalence within the class of finite loopfree graphs, and that
within the class of finite graphs. In other words, two finite games are equivalent within
the class of finite games if and only if they are equivalent with the class of locally finite
games; also, two finite loopfree games are equivalent within the class of finite loopfree
games if and only if they are equivalent within the class of finite games.
𝒢0 (x) = 0 if x is terminal, and 𝒢0 (x) = ∞ otherwise.
Then for n ≥ 1 and given x, write m = mex{𝒢n−1 (y) : y ∈ Γ(x)}, and let
𝒢n (x) = ∞ if n < n0 , and 𝒢n (x) = m if n ≥ n0 .
In the light of Proposition 2.2, we can now define the extended Sprague–Grundy
function 𝒢 in the case of a locally finite graph V. Let x ∈ V. If the second case of Propo-
sition 2.2 holds, and 𝒢n (x) = m for all sufficiently large n, then 𝒢 (x) = m. Otherwise,
we write 𝒢 (x) = ∞(𝒜), where 𝒜 ⊂ ℕ is the set of finite Sprague–Grundy values 𝒢 (y)
taken by the options y ∈ Γ(x).
We then define the rank of x, written rank(x), to be the least n such that 𝒢n (x) is finite,
or ∞ if no such n exists. (Hence the finite-rank vertices are those x with 𝒢 (x) = m ∈ ℕ,
whereas the infinite-rank vertices are those x with 𝒢 (x) = ∞(𝒜) for some 𝒜 ⊂ ℕ.)
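For a finite graph, the characterization in Proposition 2.6 below suggests one way to compute these values: repeatedly assign a finite value m to a vertex once conditions of type (i) and (ii) can be certified from already-assigned values, and give every never-assigned vertex the value ∞(𝒜), with 𝒜 the finite values among its options. The sketch below operationalizes that idea (the representation and names are ours, and it is not the paper's algorithm); it behaves as expected on small examples.

```python
def extended_grundy(G):
    """Extended Sprague-Grundy values on a finite directed graph G
    (dict: vertex -> list of options). Finite values are returned as
    ints; infinite-rank vertices get ('inf', frozenset(A))."""
    val = {}
    changed = True
    while changed:
        changed = False
        for x in G:
            if x in val:
                continue
            finite = {val[y] for y in G[x] if isinstance(val.get(y), int)}
            for m in range(len(G[x]) + 1):
                # (i): options realizing each of the values 0, ..., m-1
                if not all(i in finite for i in range(m)):
                    continue
                # (ii): every option y already has finite value < m,
                # or some option z of y already has value m
                if all((isinstance(val.get(y), int) and val[y] < m)
                       or any(val.get(z) == m for z in G[y])
                       for y in G[x]):
                    val[x] = m
                    changed = True
                    break
    for x in G:
        if x not in val:
            A = frozenset(val[y] for y in G[x]
                          if isinstance(val.get(y), int))
            val[x] = ('inf', A)
    return val
```

On a graph with a terminal vertex t, a cycle a → c → a with a → t, and an isolated two-cycle u ⇄ w, the cycle through t gets finite values while the pure two-cycle gets ∞(∅), a drawn position, consistent with Theorem 2.3(d).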
Some examples of extended Sprague–Grundy values can be found in Figure 18.2.
Theorem 2.3.
(a) 𝒢 (x) = 0 if and only if 𝒪(x) = 𝒫 .
(b) If 𝒢 (x) is a positive integer, then 𝒪(x) = 𝒩 .
(c) If 𝒢 (x) = ∞(𝒜) for a set 𝒜 with 0 ∈ 𝒜, then 𝒪(x) = 𝒩 .
(d) 𝒢 (x) = ∞(𝒜) for some 𝒜 with 0 ∉ 𝒜 if and only if 𝒪(x) = 𝒟.
Theorem 2.3 tells us that the Sprague–Grundy value of a position determines its
outcome class. In fact, much more is true: the Sprague–Grundy values of two games
determine the Sprague–Grundy value, and hence the outcome class, of their sum.
The algebra of the Sprague–Grundy values is the same as in the case of finite loopy
graphs, and full details can be found at the end of Section IV.4 of [12]. Again the proofs
carry over unchanged to the locally finite setting. We note a few particular conse-
quences.
Corollary 2.5. Every locally finite game is equivalent to some finite game.
Proposition 2.6. Let V be a locally finite graph, and let x ∈ V. Then the following are
equivalent:
(a) rank(x) ≤ n, and 𝒢 (x) = m;
(b) the following two properties hold:
(i) for each i such that 0 ≤ i ≤ m − 1, there exists yi ∈ Γ(x) such that rank(yi ) < n
and 𝒢 (yi ) = i;
(ii) for all y ∈ Γ(x), either rank(y) < n and 𝒢 (y) < m, or there is z ∈ Γ(y) with
rank(z) < n and 𝒢 (z) = m.
3 Reduced graphs
Let k ≥ 0. We will say that a locally finite directed graph V is k-stable if whenever x ∈ V
has infinite rank (i. e., whenever 𝒢 ((V, x)) = ∞(𝒜) for some 𝒜), then {0, 1, . . . , k} ⊆ 𝒜.
Note that by Theorem 2.3(d), being 0-stable is equivalent to being draw-free: every
position of V has a winning strategy either for the first player or for the second player.
Let 𝒫V be the set of 𝒫 -positions of the graph V, in other words those x ∈ V with
𝒢 ((V, x)) = 0. Consider the graph R(V) := V \ 𝒫V , which results from removing the
𝒫 -positions from V (and retaining all arcs between remaining vertices). More gen-
erally, for k ≥ 1, let Rk (V) be the graph resulting from removing all vertices x with
𝒢 ((V, x)) < k.
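Given the values, the reduction itself is immediate on a finite graph. The sketch below assumes an encoding in which finite extended values are plain integers (that encoding is our choice).

```python
def reduce_graph(G, val, k=1):
    """R^k(V): drop every vertex whose extended value is a finite
    number < k, keeping all arcs between the remaining vertices."""
    keep = {x for x in G if not (isinstance(val[x], int) and val[x] < k)}
    return {x: [y for y in G[x] if y in keep] for x in keep}
```

On the two-vertex example G = {'b': ['t'], 't': []} with values {'t': 0, 'b': 1}, removing the 𝒫-position t leaves b terminal, so its value drops from 1 to 0, as part (a) of Theorem 3.1 predicts.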
Theorem 3.1. Let V be a locally finite directed graph, and let x ∈ R(V).
(a) If x has finite rank in V, then also x has finite rank in R(V); specifically,
rank((R(V), x)) ≤ rank((V, x)) and 𝒢 ((R(V), x)) = 𝒢 ((V, x)) − 1.
(b) Suppose additionally that V is draw-free. If x has infinite rank in V, then also x has
infinite rank in R(V); specifically, if 𝒢 ((V, x)) = ∞(𝒜) for some 𝒜 (in which case
necessarily 0 ∈ 𝒜), then 𝒢 ((R(V), x)) = ∞(𝒜 − 1), where 𝒜 − 1 = {a − 1 : a ∈ 𝒜, a ≥ 1}.
If V is not draw-free, then the conclusion of part (b) may fail; removing the
𝒫 -positions may convert infinite-rank vertices to finite-rank vertices (either 𝒫 -posi-
tions or finite-rank 𝒩 -positions). See Figure 18.2 for an example.
Figure 18.2: The conclusion of Theorem 3.1(b) may fail when the graph is not draw-free. Here, remov-
ing the unique 𝒫-position e from the graph on the left, to give the graph on the right, converts the
position a from infinite rank to finite rank. The extended Sprague–Grundy values are shown by the
nodes in red.
Proof of Theorem 3.1. (a) For the first part, we use induction on the rank of x in V. We
claim that if x ∈ R(V) has rank((V, x)) = n and 𝒢 ((V, x)) = m > 0, then rank((R(V), x)) ≤
n and 𝒢 ((R(V), x)) = m − 1.
Any x with rank 0 in V is in 𝒫V and hence is not a vertex of R(V), so the claim
holds vacuously for x with rank((V, x)) = 0.
Now for n > 0, suppose the claim holds for all x with rank((V, x)) < n, and consider
x ∈ R(V) with rank((V, x)) = n and 𝒢 ((V, x)) = m.
From Proposition 2.6 we have the following properties:
(i) for each i = 0, . . . , m − 1, there exists yi ∈ ΓV (x) such that rank((V, yi )) < n and
𝒢 ((V, yi )) = i;
(ii) for all y ∈ ΓV (x), either rank((V, y)) < n and 𝒢 ((V, y)) < m, or there is z ∈ ΓV (y)
with rank((V, z)) < n and 𝒢 ((V, z)) = m.
By the induction hypothesis, each yi with 1 ≤ i ≤ m − 1 lies in R(V) with
rank((R(V), yi )) ≤ rank((V, yi )) and 𝒢 ((R(V), yi )) = i − 1 (while y0 ∈ 𝒫V is removed),
and each z in (ii) with 𝒢 ((V, z)) = m satisfies 𝒢 ((R(V), z)) = m − 1. Using Proposition 2.6
again, we conclude that rank((R(V), x)) ≤ n and 𝒢 ((R(V), x)) = m − 1, completing the
induction step.
(b) Now we suppose that in addition V is draw-free. We first want to show that
if x has finite rank in R(V), then it also has finite rank in V. In this case, we work by
induction on the rank of x in R(V).
If x has rank 0 in R(V), (i. e., if x is terminal in R(V)), then all options of x in V are
in 𝒫V (i. e., 𝒢 ((V, y)) = 0), which gives 𝒢 ((V, x)) = 1.
Now let n ≥ 1. Assume that any vertex with rank less than n in R(V) has finite rank
in V, and consider any vertex x with rank n in R(V), say 𝒢 ((R(V), x)) = m.
Then by Proposition 2.6 again,
(i) There are y0 , y1 , . . . , ym−1 ∈ ΓR(V) (x) such that for each i, rank((R(V), yi )) < n and
𝒢 ((R(V), yi )) = i. Then by the induction hypothesis, rank((V, yi )) < ∞, and part
(a) gives 𝒢 ((V, yi )) = i + 1.
(ii) For all y ∈ ΓR(V) (x), either rank((R(V), y)) < n and 𝒢 ((R(V), y)) < m, or there is z ∈
ΓR(V) (y) such that rank((R(V), z)) < n and 𝒢 ((R(V), z)) = m. Then by the induction
hypothesis and part (a) again, either 𝒢 ((V, y)) < m + 1, or there is such a z with
𝒢 ((V, z)) = m + 1.
Now consider two possibilities. First, suppose there is y ∈ ΓV (x) with 𝒢 ((V, y)) = 0.
Then for some large enough n′ , we get 𝒢n′ ((V, x)) = m + 1, and indeed x has finite rank
in V. Alternatively, there is no such y. Then if x had infinite rank in V, then we would
have 𝒢 ((V, x)) = ∞(𝒜) for some 𝒜 with 0 ∉ 𝒜. This would contradict the assumption
that V is draw-free. Hence again x must have finite rank in V, as required.
Note that the conclusion of part (b) can fail even for graphs that are acyclic in
the sense of having no directed cycles. See Figure 18.3 for an example. (The method
of proof below makes clear that the result does extend to bipartite graphs with no
directed cycles.)
Figure 18.3: An example showing that the conclusion of Proposition 3.3(b) can fail even for “loop-
free” graphs (i. e., graphs with no directed cycle). The graph shown has vertex set {ai , i ∈
ℕ} ∪ {bi , i ∈ ℕ}, with arcs from ai to bi , from ai to ai+1 , and from bi to bi+1 for each i. There are two
mex labelings, one shown in red above the vertices and the other shown in blue below the vertices.
Every position has Sprague–Grundy value ∞(0), and the graph is not 0-stable. However, the posi-
tions bi have value 0 in both mex labelings, whereas the positions ai have nonzero values in both
mex labelings.
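As used in the proof below, a mex labeling is an assignment f with f(x) = mex{f(y) : y ∈ Γ(x)} at every vertex, where mex(S) is the least natural number not in S. On a finite graph the property can be checked directly (the function names are ours):

```python
def mex(S):
    """Least natural number not contained in the set S."""
    m = 0
    while m in S:
        m += 1
    return m

def is_mex_labeling(G, f):
    """Check f(x) = mex{f(y) : y in Gamma(x)} at every vertex of G."""
    return all(f[x] == mex({f[y] for y in G[x]}) for x in G)
```

On a finite acyclic graph the unique mex labeling is the ordinary Sprague–Grundy function; on infinite or loopy graphs, as the figure illustrates, there may be several.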
Proof. We start by proving that if x ∈ V has finite rank with 𝒢 (x) = m, then f (x) = m
for all mex labelings f of V. (This holds for any locally finite directed graph V.)
We proceed by induction on rank(x). Let f be any mex labeling of V.
If rank(x) = 0, then x has no options. Then 𝒢 (x) = 0, and so f (x) = mex(∅) = 0.
Now suppose that rank(x) = n > 0, 𝒢 (x) = m, and the statement holds for all
vertices of rank less than n.
From Proposition 2.6, for each i with 0 ≤ i ≤ m − 1, there exists yi ∈ Γ(x) with
𝒢 (yi ) = i and rank(yi ) < n. Hence f (yi ) = i.
Also, for every y ∈ Γ(x) with 𝒢 (y) ≥ m, there is z ∈ Γ(y) with rank(z) < n and
𝒢 (z) = m. Then f (z) = m, and hence f (y) ≠ m.
Thus x has options on which f takes each of the values 0, 1, . . . , m − 1, but no option
on which f takes value m. This gives f (x) = m as required.
To complete the proof of part (a), suppose that V is k-stable, and let f be any mex
labeling of V. Then any vertex x with infinite rank has 𝒢 (x) = ∞(𝒜) for some 𝒜 with
k ∈ 𝒜. Hence there exists y ∈ Γ(x) with 𝒢 (y) = k, giving f (y) = k. Then f (x) ≠ k. So,
indeed, the set of vertices x with f (x) = k is exactly the set of x with 𝒢 (x) = k.
We turn to part (b), starting with the case k = 0. Suppose that V is a locally fi-
nite tree that is not 0-stable. Let x be any vertex with 𝒢 (x) = ∞(𝒜) for some 𝒜 not
containing 0 (i. e., x ∈ 𝒟).
Take any n ≥ height(x). Since the game from position x is drawn, if we consider the
game on the truncated graph Vn described just before the statement of the proposition,
so that all vertices at height n become terminal, then position x becomes a first-player
win if n − height(x) is odd, and a second-player win if n − height(x) is even.
Then we can apply again the compactness argument mentioned before the state-
ment of Proposition 3.3, separately for odd n and even n. This yields two mex labelings
f and f ′ , one of which gives value 0 to x, and the other of which gives a strictly positive
value to x, as required. This completes the proof of part (b) in the case k = 0.
Now we extend to k > 0. Suppose V is (k − 1)-stable but not k-stable. As in Corol-
lary 3.2, we can apply the reduction operator k−1 times, removing all the vertices y ∈ V
with 𝒢 ((V, y)) < k to arrive at the graph Rk (V).
Any x ∈ Rk (V) either has 𝒢 ((V, x)) = m for some finite m ≥ k, or 𝒢 ((V, x)) = ∞(𝒜)
for some 𝒜 with {0, . . . , k − 1} ⊆ 𝒜. It is then easy to check that whenever f ̂ : Rk (V) → ℕ
is a mex labeling of Rk (V), we can obtain a mex labeling f : V → ℕ of V by defining
f (x) = 𝒢 ((V, x)) for x ∉ Rk (V), and f (x) = f ̂(x) + k for x ∈ Rk (V). (18.1)
Let x ∈ V with 𝒢 ((V, x)) = ∞(𝒜) for some 𝒜 containing 0, . . . , k − 1 but not k. Then
by applying Theorem 3.1 k times we have x ∈ Rk (V) and 𝒢 ((Rk (V), x)) = ∞(ℬ) where
ℬ = 𝒜 − k. In particular, 0 ∉ ℬ (i. e., the position x in Rk (V) is a draw). We wish to show
that there are mex labelings f , f ′ of V such that f (x) = k and f ′ (x) ≠ k. In light of (18.1),
it is enough to show that there are mex labelings f ̂ and f ̂′ of Rk (V) such that f ̂(x) = 0
and f ̂′ (x) > 0.
Since x is a draw in Rk (V), we would like to use the same approach as in the
case k = 0. The situation is more complicated since the graph Rk (V) may not be con-
nected. However, the graph Rk (V) is a union of finitely or countably many disjoint
trees. Any labeling that restricts to a mex labeling of each tree component is a mex
labeling of the whole graph. So it suffices to find mex labelings of the tree component
of Rk (V) that contains x, one of which assigns value 0 to x, and another of which as-
signs strictly positive value to x. This indeed can be done using the same compactness
argument used in the case k = 0.
This completes the proof of part (b).
4 Random game-trees
4.1 Galton–Watson trees
A Galton–Watson (or Bienaymé) branching process is constructed as follows. We fix
some offspring distribution, which is a probability distribution p = (pk , k ∈ ℕ) on the
nonnegative integers. The process begins with a single individual, called the root. The
root individual has a random number of children distributed according to the offspring
distribution, which form generation 1. Then each of the members of generation 1 has a
number of children according to the offspring distribution, forming generation 2, and
so on. All family sizes are independent. See, for example, [6] for a basic introduction,
and [11] for much more depth including a rigorous construction.
We derive a directed graph from the process by regarding each individual as a ver-
tex and putting an arc to each child from its parent. In this way, each vertex of the
graph has in-degree 1, except for the root which has in-degree 0. We call the resulting
graph a Galton–Watson tree. This tree has a natural self-similarity property: condi-
tional on the number of the children of the root being k, the subtrees rooted at those
children are independent, and each one has the distribution of the original Galton–
Watson tree.
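The construction can be sampled directly, truncating at a fixed depth so that the procedure terminates even when the tree is infinite. The truncation depth and the interface below are choices of this sketch.

```python
import random

def sample_gw_tree(offspring, depth, rng=random):
    """Galton-Watson tree truncated at height `depth`; offspring(rng)
    draws one family size. Returns dict: vertex id -> list of child ids."""
    tree = {0: []}
    stack = [(0, 0)]        # (vertex, height)
    nxt = 1
    while stack:
        v, h = stack.pop()
        if h == depth:
            continue        # truncation: no children at maximum height
        for _ in range(offspring(rng)):
            tree[v].append(nxt)
            tree[nxt] = []
            stack.append((nxt, h + 1))
            nxt += 1
    return tree
```

For the 0-or-4 offspring law of the introduction, one would pass, for example, `offspring=lambda rng: 4 if rng.random() < p else 0`.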
We assume always that p0 > 0, so that the tree can have terminal vertices.
A key role in what follows will be played by the probability generating function of
the offspring distribution, defined by
ϕ(s) = ∑k≥0 pk s^k .
The function ϕ is strictly increasing on the interval [0, 1], and maps [0, 1] bijectively to
the interval [p0 , 1].
A fundamental result is a criterion for the tree to be infinite in terms of the mean
μ = ∑k≥0 kpk = ϕ′ (1) of the offspring distribution p. Excluding the trivial case p1 = 1
(where with probability 1 the tree consists of a single path), we have that whenever
μ ≤ 1, the tree is finite with probability 1, and whenever μ > 1, there is positive proba-
bility for the tree to be infinite.
If d = sup{k : pk > 0} is finite, then we say that the offspring distribution has max-
imum out-degree d. Otherwise we say that the offspring distribution has unbounded
vertex degrees.
Lemma 4.1. Consider a Galton–Watson tree T with root o. Let 𝒞 be any set of possible
Sprague–Grundy values. The following are equivalent:
(a) ℙ(𝒢 ((T, o)) ∈ 𝒞 ) > 0;
(b) ℙ(𝒢 ((T, u)) ∈ 𝒞 for some u ∈ T) > 0.
For example, the tree T is draw-free with probability 1 if and only if the probability
that the root is drawn is 0. So we do not need to distinguish carefully between saying
that “the tree has draws with positive probability” and that “the root is drawn with
positive probability”. More generally, the tree T is k-stable with probability 1 if and
only if the probability that 𝒢 (T, o) = ∞(𝒜) for some 𝒜 not containing {0, 1, . . . , k} is 0.
Proof of Lemma 4.1. Trivially, (a) implies (b). On the other hand, if (a) fails, so that
ℙ(𝒢 ((T, o)) ∈ 𝒞 ) = 0, then the self-similarity of the Galton–Watson tree, the fact that the
tree has at most countably many vertices and the countable additivity of probability
measures combine to give that also ℙ(𝒢 ((T, u)) ∈ 𝒞 for some u ∈ T) = 0, so that (b)
also fails.
Now let P be the probability that o ∈ 𝒫 . We have P = limn→∞ Pn . Taking limits in (18.2)
and using the fact that the generating function ϕ is continuous and increasing on
[0, 1], we obtain part (a) of the following result. A similar approach involving the
probability of winning strategies for the first player gives part (b). For full details,
see [9].
Note that h defined in (18.3) is the second iterate of the function 1 − ϕ, that is,
h(s) = 1 − ϕ(1 − ϕ(s)). The function 1 − ϕ is continuous and strictly decreasing, mapping
[0, 1] onto [0, 1 − p0 ]. It follows
that 1 − ϕ has precisely one fixed point in [0, 1] and that this fixed point is also a fixed
point of h. So Corollary 4.3 tells us that the game has positive probability of draws if
and only if h has further fixed points which are not fixed points of 1 − ϕ.
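For a concrete family, these fixed points can be located numerically. The grid-based sign-change count below (a rough numerical sketch; the grid resolution is our choice) reproduces the Poisson picture of Example 4.4 and Figure 18.4: one fixed point of h for λ = 2.7, just below the critical point λ = e, and three for λ = 2.8, just above it.

```python
import math

def h(s, lam):
    """h = (1 - phi) composed with itself, phi the Poisson(lam) pgf."""
    phi = lambda t: math.exp(lam * (t - 1.0))
    return 1.0 - phi(1.0 - phi(s))

def num_fixed_points(lam, n=100000):
    """Count sign changes of h(s) - s on a uniform grid over [0, 1]."""
    prev = h(0.0, lam) - 0.0    # h(0) > 0 since h(0) = 1 - phi(1 - phi(0))
    count = 0
    for i in range(1, n + 1):
        s = i / n
        cur = h(s, lam) - s
        if (prev > 0) != (cur > 0):
            count += 1
        prev = cur
    return count
```

Since h is increasing with h(0) > 0 and h(1) < 1, the number of sign changes is odd, and the transition from one to three crossings marks the onset of draws.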
Two particular families of offspring distributions had been considered earlier. The
Binomial(2, p) case was studied by Holroyd [8]. The case of the Poisson offspring family
is closely related to the analysis of the Karp–Sipser algorithm used to find large match-
ings or independent sets of a graph, which was introduced by Karp and Sipser [10];
Figure 18.4: An illustration of the phase transition from the nondraw to the draw region for
Poisson(λ) offspring distributions (see Example 4.4). The two plots show the function h(s) − s for
s ∈ [0, 1], where h is defined by (18.3). The fixed points of h are those s where h(s) − s crosses the
horizontal axis. On the left, λ = 2.7, just below the critical point λ = e; the function h has just one
fixed point. On the right, λ = 2.8, just above the critical point; now h has three fixed points.
the link to games is not described explicitly in that paper, but the choice of notation
and terminology makes clear that the authors were aware of it.
One particular focus of [9] was on the nature of the phase transitions between
the set of offspring distributions without draws and the set of offspring distributions
with positive probability of draws. This transition can be either continuous or dis-
continuous. Without going into precise details, we illustrate with a couple of exam-
ples.
Figure 18.5: The phase transition for the family of offspring distributions given in Example 4.5 with
p0 = 1 − a, p2 = a/2, and p10 = a/2. Again, the function h(s) − s is shown for s ∈ [0, 1]. On the left,
a = 0.977, and on the right, a = 0.979 ≈ ac . Unlike in Figure 18.4, at the critical point, there are
already multiple fixed points of h; at ac , the draw probability jumps from 0 to a positive value around
0.681, which is the distance between the minimum and maximum fixed points of h.
ϕ(1) (s) = [ϕ(P + s(1 − P)) − ϕ(s(1 − P))] / (1 − P). (18.4)
Proof. Since we assume that T has no draws, each vertex of T is either a 𝒫 -position or
an 𝒩 -position. The type of a vertex is determined by the subtree rooted at that vertex.
Conditionally on the number of children of the root, the subtrees rooted at each child
are independent, and each has the same distribution as the original tree. In particular,
each child is independently a 𝒫 -position with probability P and an 𝒩 -position with
probability 1 − P.
The probability that the root has m + k children, of which k are 𝒫 -children and m are
𝒩 -children, is
pm+k (m+k choose k) P^k (1 − P)^m .
We can sum over k ≥ 1 to obtain the probability that the root is of type 𝒩 and has m
𝒩 -children. Finally, we can condition on the event that the root has type 𝒩 (which
has probability 1 − P) to obtain that the conditional probability that the root has m
𝒩 -children given that it has type 𝒩 is
p(1)m := (1/(1 − P)) ∑k≥1 pm+k (m+k choose k) P^k (1 − P)^m .
Finally, we want to calculate the probability generating function ϕ(1) (s) := ∑m≥0 s^m p(1)m
of this distribution. This can easily be done using the binomial theorem to arrive at the
form given in (18.4).
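As a sanity check on (18.4), one can compare it against the explicit sum for p(1)m in a case where only finitely many pk are nonzero, say the 0-or-4 offspring law of the introduction. The comparison is purely algebraic (the binomial expansion behind (18.4)), so any value of P in (0, 1) will do; the function names below are ours.

```python
from math import comb

def phi(t, p):
    """pgf of the 0-or-4 offspring law: p0 = 1 - p, p4 = p."""
    return (1 - p) + p * t ** 4

def phi1_closed(s, p, P):
    """Right-hand side of equation (18.4)."""
    return (phi(P + s * (1 - P), p) - phi(s * (1 - P), p)) / (1 - P)

def phi1_series(s, p, P):
    """Sum over the explicit probabilities p^(1)_m for m = 0, ..., 3;
    only p_4 is nonzero among p_{m+k} with k >= 1."""
    return sum(p * comb(4, 4 - m) * P ** (4 - m) * (1 - P) ** m / (1 - P)
               * s ** m
               for m in range(4))
```

The two expressions agree to machine precision across [0, 1].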
Combining Corollary 3.2 and Theorem 4.6 is the key to studying the infinite-rank
vertices of our Galton–Watson tree T; see the strategy described at the beginning of
Section 5.
We finish this section with a result about the possible infinite Sprague–Grundy
values that can occur in a Galton–Watson game. Essentially, the value ∞(𝒜) has posi-
tive probability to appear for every finite 𝒜 that is not ruled out either by k-stability or
by finite maximum vertex degree. Most notably, part (a)(i) says that for a tree that has
draws and for which the offspring distribution has infinite support, all finite 𝒜 have
positive probabilities.
(ii) If the maximum out-degree is d, then there is positive probability that 𝒢 (o) =
∞(𝒜) if and only if {0, 1, . . . , k − 1} ⊆ 𝒜 ⊂ {0, 1, . . . , d} with |𝒜| ≤ d − 1.
Proof. First, we note that all finite Sprague–Grundy values have positive probabilities,
up to the maximum out-degree d if there is one. This is easy by induction. We know that
value 0 is possible since any terminal position has value 0. If values 0, 1, . . . , k − 1 are
possible, and if it is possible for the root to have degree k or larger, then there is positive
probability that the set of values of the children of the root is precisely {0, 1, . . . , k − 1},
giving value k to the root as required.
Now for part (a), since draws are possible, the value ∞(ℬ) has positive probabil-
ity for some ℬ not containing 0. In that case, there is positive probability for all the
children of the root to have value ∞(ℬ), and then the root has value ∞(0).
So the value ∞(0) has positive probability. Now if 𝒜 is any finite set such that
the number of children of the root can be as large as |𝒜| + 1, then there is positive
probability that the set of values of the children of the root is precisely 𝒜 ∪ {∞(0)}, and
in that case the value of the root is ∞(𝒜) as required.
Finally, if |𝒜| is greater than or equal to the maximum degree, then the value ∞(𝒜)
is impossible, since any vertex with such a value must have at least one child with
value m for each m ∈ 𝒜, and additionally at least one child with infinite rank.
We can derive the result for part (b) by applying part (a) to the Galton–Watson
tree T (k) obtained by conditioning the root to have Sprague–Grundy value not in
{0, 1, . . . , k − 1} and removing all the vertices with values {0, 1, . . . , k − 1} from the graph,
as described above. Theorem 3.1 tells us that if the resulting tree has positive proba-
bility to have a node with value ∞(𝒜), then the original tree has positive probability
to have a node with value ∞(ℬ) where ℬ = {b ≥ k : b − k ∈ 𝒜} ∪ {0, 1, . . . , k − 1}, and
the desired results follow.
5 Examples
First, we lay out how to use the results of the previous sections to address the question
of which infinite-rank Sprague–Grundy values have positive probability for a given
Galton–Watson tree T.
Example 5.1 (Poisson case, continued). Galton–Watson trees with Poisson offspring
distribution behave particularly nicely under the graph reduction operation. This al-
lows us to give a complete analysis of the Poisson case without any need for calcula-
tions or numerical approximation.
The tree has positive probability to be infinite precisely when λ > 1. We already
saw in Example 4.4 that there is positive probability of a draw precisely when λ > e.
Suppose we are in the case λ ≤ e without draws. So each node is a 𝒫 -node (with
probability P) or an 𝒩 -node (with probability 1 − P).
By basic properties of the Poisson distribution the number of 𝒫 -children of
the root is Poisson(λP)-distributed, and the number of 𝒩 -children of the root is
Poisson(λ(1 − P))-distributed, and the two are independent.
If we condition the root to have at least one 𝒫 -child and then remove all its
𝒫 -children, then because of the independence of the number of 𝒫 -children and the
number of 𝒩 -children, we are simply left with a Poisson(λ(1 − P)) number of chil-
dren.
So we again have a Poisson Galton–Watson tree, but now with a new parameter
λ(1) = λ(1 − P) < λ. Since λ(1) < e, the new tree is still draw-free with probability 1.
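Numerically, in the draw-free regime the probability P that the root is a 𝒫-position satisfies P = e^{−λP}: the root is 𝒫 exactly when it has no 𝒫-children, and by the Poisson splitting described above, the 𝒫-children count is Poisson(λP). Since s − e^{−λs} is increasing with opposite signs at 0 and 1, the root is unique and bisection finds it; iterating the reduction then just shrinks the parameter. Function names in this sketch are ours.

```python
import math

def p_position_prob(lam, tol=1e-12):
    """Unique root in [0, 1] of P = exp(-lam * P): the chance that the
    root of a draw-free Poisson(lam) game-tree is a P-position."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid < math.exp(-lam * mid):
            lo = mid        # still below the root
        else:
            hi = mid
    return (lo + hi) / 2

def reduced_lambda(lam):
    """Parameter of the reduced Poisson tree: lam * (1 - P)."""
    return lam * (1.0 - p_position_prob(lam))
```

Starting from any λ ≤ e, the sequence of reduced parameters strictly decreases, so every reduced tree stays in the draw-free regime, matching the conclusion above.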
Hence, to adapt the terminology of the introduction, in the Poisson case, we may
see a “blatantly infinite” game once λ > e, but for λ ≤ e, we are at worst “latently infinite”.
There is no λ that gives “patently infinite” behavior, whereby draws are absent,
but infinite rank vertices have positive probability.
Example 5.2 (Degrees 0 and 4). We return to the example in the introduction, where
all out-degrees are 0 or 4. We have p4 = p and p0 = 1 − p for some p ∈ (0, 1).
If p ≤ a0 := 1/4, then the mean offspring size is less than or equal to 1, and the tree
is finite with probability 1.
We can show algebraically that there is positive probability of a draw if and only if p > a2 := 5^{3/4}/4 ≈ 0.83593. Namely, one can check that the function h defined in (18.3) has derivative less than 1 on [0, 1] for all p ≤ a2 (except at a single point in the case p = a2), and so h has just one fixed point for such p. Meanwhile, for p > a2, there is a fixed point s∗ of the function 1 − ϕ for which h′(s∗) > 1, and this can be used to show that h has at least two further fixed points. Corollary 4.3 then gives the result.
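The fixed-point claim is easy to probe numerically. The sketch below is my own (not the authors' code): it counts transversal crossings of the diagonal by h(s) = 1 − ϕ(1 − ϕ(s)) for the 0-or-4 offspring generating function ϕ(s) = (1 − p) + ps⁴, with the sample values p = 0.5 and p = 0.9 sitting on either side of a2.

```python
# A sketch (my own, not the authors' code): count the fixed points of
# h(s) = 1 - phi(1 - phi(s)) on [0, 1] for the 0-or-4 offspring law,
# phi(s) = (1 - p) + p * s**4, by scanning for sign changes of h(s) - s.

def count_fixed_points(p, grid=100000):
    phi = lambda s: (1 - p) + p * s**4
    h = lambda s: 1 - phi(1 - phi(s))
    count = 0
    prev = h(0.0)            # value of h(s) - s at s = 0
    for k in range(1, grid + 1):
        s = k / grid
        cur = h(s) - s
        if prev * cur < 0:   # transversal crossing of the diagonal
            count += 1
        prev = cur
    return count

a2 = 5 ** 0.75 / 4           # ~0.83593
assert count_fixed_points(0.5) == 1   # p < a2: a unique fixed point
assert count_fixed_points(0.9) >= 3   # p > a2: extra fixed points (draws)
```

Grid scanning misses tangential fixed points (as at p = a2 itself), but for parameters away from the critical point the crossings are transversal and the count is reliable.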
For p between a0 and a2 , there are no draws, but the tree is infinite with positive probability, so we may ask whether there can exist positions with infinite rank.
Numerically, we observe a phase transition around the point a1 ≈ 0.52198. For
p ≤ a1 , we know that the tree T has zero probability of a draw, and we observe that
the same is also true for the trees T (1) and T (2) (their maximum out-degrees are 3 and
2, respectively, so their generating functions ϕ(1) and ϕ(2) are cubic and quadratic,
respectively. The tree T (3) has vertices of out-degrees only 0 and 1, and will also be
finite with probability 1, so we do not need to examine T (k) for any higher k).
Hence for p ∈ (a0 , a1 ], we have the “latently infinite” phase where all Sprague–
Grundy values are finite with probability 1.
However, for p ∈ (a1 , a2 ], we observe that the function h(1) (s) := 1 − ϕ(1) (1 − ϕ(1) (s))
has more than one fixed point. Consequently, there is positive probability of a draw
in the tree T (1) . The tree T has positive probability not to be 1-stable and so to have
positions of infinite rank.
The behavior of h, h(1) , and h(2) around the phase transition point p = a1 is shown
in Figure 18.6. Although the precise nature and location of this phase transition are only found numerically, it is not hard to show rigorously that for p just above a0, the functions h(1) and h(2) have only one fixed point, whereas for p just below a2, the function
h(1) has more than one fixed point, so that the family of distributions does display
all four of the “finite”, “latently infinite”, “patently infinite”, and “blatantly infinite”
types of behavior.
Example 5.3 (Geometric case). We now consider the family of geometric offspring distributions with p_k = q^k (1 − q) for k = 0, 1, 2, . . . , for some q ∈ (0, 1).
Rather surprisingly, there is no q for which draws have positive probability! See,
for example, Proposition 3(iii) of [9]. (This shows, for example, that the property of
having positive probability of draws is not monotone in the offspring distribution. If
we take any λ > e, then as discussed above, the Poisson(λ) distribution has positive
probability of draws, but for q sufficiently large, this distribution is stochastically dom-
inated by the Geometric(q) distribution, which has no draws.)
364 | J. B. Martin
Figure 18.6: The case of the 0-or-4 distribution from Example 5.2 with p = 0.52198 ≈ a1 . From left to
right the three graphs show the functions h(s) − s, h(1) (s) − s, and h(2) (s) − s. As p moves through
the critical point a1 , the function h(1) acquires multiple fixed points. For p ≤ a1 , the tree has only
finite-rank vertices. For p > a1 , the tree no longer has probability 1 to be 1-stable, and, for example,
the Sprague–Grundy value ∞(0) has positive probability.
Figure 18.7: The geometric case of Example 5.3 with q = 0.91 ∈ [q2 , q3 ). As in Figure 18.6, we plot the
functions h(s)−s, h(1) (s)−s, and h(2) (s)−s. The functions h and h(1) have unique fixed points, but the
function h(2) has multiple fixed points; so the tree has probability 1 to be 1-stable but has probability
less than 1 of being 2-stable.
However, other interesting phase transitions for the geometric family do occur. Numer-
ically, we observe that there are critical values q0 = 1/2, q1 ≈ 0.88578, q2 ≈ 0.88956,
and q3 ≈ 0.923077 such that the following hold.
– For q ≤ 0.5, the tree is finite with probability 1.
– For q ∈ (0.5, q1 ], there are infinite paths with positive probability, but the tree
is 3-stable with probability 1. In fact, for q sufficiently close to 0.5, the tree T (1)
is finite with probability 1, and so in fact the tree is k-stable for all k, that is, all
positions have finite rank (the latently infinite phase). It seems plausible that in
fact the latently infinite phase continues all the way to q1 , but we do not know
how to demonstrate that.
– For q ∈ (q1 , q2 ], with positive probability the tree is not 3-stable; however, it con-
tinues to be 2-stable.
– For q ∈ (q2 , q3 ], with positive probability the tree is not 2-stable; however, it con-
tinues to be 1-stable (see Figure 18.7).
– For q ≥ q3 , with positive probability the tree is not 1-stable (but as we know, it
continues to be 0-stable or, in other words, draw-free for all q).
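A quick numerical illustration of the absence of draws (my own sketch; the test values of q are arbitrary, and h(s) = 1 − ϕ(1 − ϕ(s)) is taken as in Example 5.2, here with ϕ(s) = (1 − q)/(1 − qs)):

```python
# Numerical sanity check of the geometric case: h appears to have a
# unique fixed point in [0, 1] for every q, consistent with
# Proposition 3(iii) of [9] (no draws).

def fixed_point_count(q, grid=100000):
    phi = lambda s: (1 - q) / (1 - q * s)
    h = lambda s: 1 - phi(1 - phi(s))
    count, prev = 0, h(0.0)   # value of h(s) - s at s = 0
    for k in range(1, grid + 1):
        s = k / grid
        cur = h(s) - s
        if prev * cur < 0:    # crossing of the diagonal
            count += 1
        prev = cur
    return count

for q in (0.3, 0.6, 0.91, 0.99):
    assert fixed_point_count(q) == 1
```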
Except for the transition at q0 , the precise nature and location of all the phase transi-
tions above are only found numerically. However, with a sufficiently precise analysis,
we could rigorously establish in each case a smaller interval on which the claimed
behavior holds (for example, we could find some subinterval of the claimed interval
(q2 , q3 ) on which h(1) has only one fixed point, whereas h(2) has more than one fixed
point).
In summary, the three families in Examples 5.1–5.3 show a wide variety of behav-
iors. In the Poisson case, we have the existence of draws whenever we have the exis-
tence of positions with infinite rank. In the 0-or-4 case, there is additionally a phase
with infinite rank vertices but no draws. In the geometric case, it is the phase with
draws that is missing; however, we see additional phase transitions losing 3-stability,
2-stability, and 1-stability step by step as the parameter increases.
We end with a question.
Question 5.4. Does there exist for every k ∈ ℕ an offspring distribution for which the
Galton–Watson tree is k-stable with probability 1, but nonetheless infinite rank posi-
tions exist with positive probability? Numerical explorations have so far only produced
examples up to k = 2 (for example, the Geometric(q) case with q ∈ (q1 , q2 ] described
above).
Bibliography
[1] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 2, 2nd edition, CRC Press, 2003.
[2] A. S. Fraenkel and Y. Perl, Constructions in combinatorial games with cycles, in Infinite and
Finite Sets (to Paul Erdős on his 60th birthday), Volume 2, ed. A. Hajnal, R. Rado, and V. Sós,
pp. 667–699, North-Holland, 1975.
[3] A. S. Fraenkel and O. Rahat, Infinite cyclic impartial games, Theor. Comp. Sci. 252 (2001),
13–22.
[4] A. S. Fraenkel and U. Tassa, Strategy for a class of games with dynamic ties, Comput. Math.
Appl. 1 (1975), 237–254.
[5] A. S. Fraenkel and Y. Yesha, The generalized Sprague–Grundy function and its invariance under
certain mappings, J. Comb. Thy. A 43 (1986), 165–177.
[6] G. Grimmett and D. Welsh, Probability: an Introduction, 2nd edition, Oxford University Press,
Oxford, 2014.
[7] R. K. Guy and C. A. B. Smith, The G-values of various games, Math. Proc. Camb. Phil. Soc. 52
(1956), 514–526.
[8] A. E. Holroyd, Percolation Beyond Connectivity, PhD thesis, Univ. Cambridge, 2000.
[9] A. E. Holroyd and J. B. Martin, Galton–Watson games, Random Structures & Algorithms, 59(4)
(2021), 495–521. DOI 10.1002/rsa.21008.
[10] R. M. Karp and M. Sipser, Maximum matching in sparse random graphs, in 22nd Annual
Symposium on Foundations of Computer Science, pp. 364–375, IEEE, 1981.
[11] J.-F. Le Gall, Random trees and applications, Probab. Surveys 2 (2005), 245–311.
[12] A. N. Siegel, Combinatorial Game Theory, Amer. Math. Soc., Providence, Rhode Island, 2013.
[13] C. A. Smith, Graphs and composite games, J. Comb. Thy. 1 (1966), 51–81.
Ryohei Miyadera and Yushi Nakaya
Grundy numbers of impartial
three-dimensional chocolate-bar games
Abstract: Chocolate-bar games are variants of the Chomp game. Let Z≥0 be the set of nonnegative integers, and let x, y, z ∈ Z≥0 . A three-dimensional chocolate bar is composed of 1 × 1 × 1 cubes with a “bitter” or “poison” cube at the bottom of the column at position (0, 0). For u, w ∈ Z≥0 such that u ≤ x and w ≤ z, the height of
the column at position (u, w) is min(F(u, w), y) + 1, where F is an increasing function.
We denote such a chocolate bar as CB(F, x, y, z). Two players take turns to cut the bar
along a plane horizontally or vertically along the grooves, and eat the broken pieces.
The player who manages to leave the opponent with a single bitter cube is the win-
ner. In a prior work, we characterized the function f for a two-dimensional chocolate-
bar game such that the Sprague–Grundy value of CB(f , y, z) is y ⊕ z. In this study,
we characterize the function F such that the Sprague–Grundy value of CB(F, x, y, z) is
x ⊕ y ⊕ z.
1 Introduction
Chocolate-bar games are variants of the Chomp game. A two-dimensional chocolate
bar is a rectangular array of squares in which some squares are removed throughout
the course of the game. A “poisoned” or “bitter” square, typically printed in black, is
included in some part of the bar. Figure 19.1 shows an example of a two-dimensional
chocolate bar. Each player takes turns breaking the bar in a straight line along the
grooves and then “eats” a broken piece. The player who manages to leave the oppo-
nent with a single bitter (black) block wins the game.
A three-dimensional chocolate bar is a three-dimensional array of cubes in which
a poisoned cube printed in black is included in some part of the bar. Figure 19.2 shows
an example of a three-dimensional chocolate bar.
Each player takes turns dividing the bar along a plane that is horizontal or vertical
along the grooves, and then eats a broken piece. The player who manages to leave the
Acknowledgement: The authors would like to express their thanks to the anonymous referee whose
time and patience improved the quality of this work. We also thank the Integers staff for their valuable
time.
Ryohei Miyadera, Kwansei Gakuin High School, Nishinomiya City, Japan, e-mail:
runnerskg@gmail.com
Yushi Nakaya, School of Engineering, Tohoku University, Sendai City, Japan, e-mail:
math271k@gmail.com
https://doi.org/10.1515/9783110755411-019
368 | R. Miyadera and Y. Nakaya
opponent with a single bitter cube wins the game. Examples of cut chocolate bars are
shown in Figures 19.3, 19.4, and 19.5.
Example 1.2. There are three ways to cut a three-dimensional chocolate bar.
(i) Vertical cut.
Example 1.3. Here, we provide an example of the traditional Nim game and two ex-
amples of chocolate bars.
In this context, it is natural to search for a necessary and sufficient condition under which a chocolate bar has a Grundy number given by the Nim-sum of the length, height, and width of the bar.
We have previously presented the necessary and sufficient condition for a two-
dimensional chocolate bar in [2].
This paper aims to answer the following question.
Question. What is the necessary and sufficient condition under which a three-
dimensional chocolate bar may have the Grundy number (x − 1) ⊕ (y − 1) ⊕ (z − 1), where
x, y, and z are the length, height, and width of the bar, respectively?
two-dimensional chocolate bar presented in [2]. However, the proof of the necessary
condition for a three-dimensional chocolate bar is more difficult to obtain.
x ⊕ y = ∑_{i=0}^{n} w_i 2^i , (19.1)
Proof. If x ⊕ y = x ⊕ z, then y = x ⊕ x ⊕ y = x ⊕ x ⊕ z = z.
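In code, the Nim-sum is bitwise XOR, and the cancellation argument of the proof is a one-line check (the value x = 6 and the range 16 are arbitrary test choices of mine):

```python
# Nim-sum is bitwise XOR; cancellation as in the lemma: if x ^ y == x ^ z,
# then y == z, because XOR-ing by x again recovers the other summand.
x = 6  # arbitrary illustrative value
for y in range(16):
    for z in range(16):
        if x ^ y == x ^ z:
            assert y == z
assert x ^ (x ^ 5) == 5  # x cancels, exactly as in the proof
```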
As chocolate-bar games are impartial and without draws, only two outcome
classes are possible.
Definition 2.3. The disjunctive sum of two games G and H, denoted by G + H, is a supergame in which a player may move in either G or H, but not in both.
Definition 2.4. For any position p of game G, there exists a set of positions that can
be reached in precisely one move in G, which we denote as move(p).
Remark 2.1. Note that Examples 3.1 and 3.2 are examples of a move.
Definition 2.5. (i) The minimum excluded value (mex) of a set S of nonnegative inte-
gers is the least nonnegative integer that is not in S.
(ii) Let p be a position in an impartial game. The associated Grundy number is denoted
as G(p) and is recursively defined as G(p) = mex{G(h) : h ∈ move(p)}.
Lemma 2. Let S be a set of nonnegative integers, and let mex(S) = m for some m ∈ Z≥0 .
Then {k : k < m and k ∈ Z≥0 } ⊂ S.
Lemma 3. If G(p) > x for some x ∈ Z≥0 , then there exists h ∈ move(p) such that
G(h) = x.
The next result demonstrates the usefulness of the Sprague–Grundy theory in im-
partial games.
Theorem 1. Let G and H be impartial rulesets, and let GG and GH be, respectively, the
Grundy numbers of game g played under the rules of G and game h played under the
rules of H. Then the following conditions hold.
(i) For any position g of G, GG (g) = 0 if and only if g is a 𝒫 -position.
(ii) The Grundy number of position {g, h} in game G + H is GG (g) ⊕ GH (h).
Definition 3.1. A function f of Z≥0 into itself is said to be increasing if f (u) ≤ f (v) for
u, v ∈ Z≥0 such that u ≤ v.
Definition 3.2. Let f be an increasing function defined by Definition 3.1. For y, z ∈ Z≥0 ,
the chocolate bar has z + 1 columns, where the 0th column is the bitter square, and
the height of the ith column is t(i) = min(f (i), y) + 1 for i = 0, 1, . . . , z, which is denoted
as CB(f , y, z).
Thus the height of the ith column is determined by the value of min(f (i), y) + 1,
which is determined by f , i, and y.
Definition 3.3. Each player takes turns breaking the bar in a straight line along the
grooves into two pieces and eats the piece without the bitter part. The player who
breaks the chocolate bar and eats it, leaving the opponent with a single bitter block
(black block), is the winner.
We define a function f for a chocolate bar CB(f , y, z) and denote y, z as the coordi-
nates of CB(f , y, z).
Example 3.1. Let f (t) = ⌊t/2⌋, where ⌊ ⌋ is the floor function. Here we present examples
of CB(f , y, z)-type chocolate bars. Note that the function f defines the shape of the bar,
and the two coordinates y and z represent the numbers of grooves above and to the
right of the bitter square, respectively.
For a fixed function f , we define movef for each position {y, z} of the chocolate bar
CB(f , y, z). The set movef ({y, z}) is comprised of the positions of the chocolate bar ob-
tained by cutting the chocolate bar CB(f , y, z) once, and movef represents a special
case of move defined by Definition 2.4.
Definition 3.4. For y, z ∈ Z≥0 , we define movef ({y, z}) = {{v, z} : v < y}∪{{min(y, f (w)), w} :
w < z}, where v, w ∈ Z≥0 .
Remark 3.1. For a fixed function f , we use move({y, z}) instead of movef ({y, z}) for con-
venience.
Example 3.2. Here we elucidate movef for f (t) = ⌊t/2⌋. If we begin with position {y, z} =
{2, 5} in Figure 19.9 and reduce z = 5 to z = 3, then the y-coordinate (first coordinate)
becomes min(2, ⌊3/2⌋) = min(2, 1) = 1.
Therefore we have {1, 3} ∈ movef ({2, 5}), that is, we obtain {1, 3} in Figure 19.11
by cutting {2, 5}. We can easily determine that {1, 5}, {0, 5} ∈ movef ({2, 5}), {1, 3} ∈
movef ({1, 5}), and {0, 5} ∉ movef ({1, 3}). See Figures 19.9, 19.10, 19.11, and 19.12.
According to Definitions 2.5 and 3.4, we define the Grundy number of a two-
dimensional chocolate bar.
Definition 3.5. For y, z ∈ Z≥0 , we define 𝒢 ({y, z}) = mex({𝒢 ({v, z}) : v < y, v ∈ Z≥0 } ∪
{𝒢 ({min(y, f (w)), w}) : w < z, w ∈ Z≥0 }).
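Definitions 2.5, 3.4, and 3.5 translate directly into a memoized recursion. The sketch below is my own transcription for f(t) = ⌊t/2⌋; it observes that the values agree with the Nim-sum y ⊕ z on positions with y ≤ f(z), matching the characterization from the authors' earlier work [2].

```python
from functools import lru_cache

# My own transcription of Definitions 2.5, 3.4, and 3.5 for the
# two-dimensional bar CB(f, y, z) with f(t) = floor(t / 2).

def f(t):
    return t // 2

def mex(s):
    """Least nonnegative integer not in s (Definition 2.5)."""
    m = 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(y, z):
    """Grundy number of position {y, z} (Definition 3.5)."""
    options = {grundy(v, z) for v in range(y)}
    options |= {grundy(min(y, f(w)), w) for w in range(z)}
    return mex(options)

# The values agree with the Nim-sum y XOR z on positions with y <= f(z):
assert all(grundy(y, z) == y ^ z
           for z in range(30) for y in range(f(z) + 1))
```

For instance, grundy(1, 3) returns 2 = 1 ⊕ 3, reproducing the cut {2, 5} → {1, 3} discussion of Example 3.2 by pure recursion.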
⌊z/2^i ⌋ = ⌊z′/2^i ⌋ implies
⌊h(z)/2^{i−1} ⌋ = ⌊h(z′)/2^{i−1} ⌋.
Lemma 4. Suppose that h has the NS property as per Definition 3.6 and y ≤ h(z) for
y, z ∈ Z≥0 . Let
A = {y ⊕ (z − k) : k = 1, 2, . . . , z}
and
Then A = B.
Proof. For any u, v ∈ Z≥0 with u ≤ h(v), let 𝒢h ({u, v}) be the Grundy number of
CB(h, u, v). Then by the NS property of function h and Theorem 2
y ≤ h(z). (19.3)
2^n > z, y. (19.4)
y ⊕ i ∈ A (19.5)
such that
move({y, z + 2^n }) = {{y − k, z + 2^n } : k = 1, 2, . . . , y}
∪ {{min(y, h(z + 2^n − k)), z + 2^n − k} : k = 1, 2, . . . , z + 2^n }.
Therefore move({y, z + 2^n }) is the union of the sets given below as (19.9), (19.10), and (19.11).
y ⊕ i = u ⊕ v ∈ B. (19.13)
Thus far, we have considered only two-dimensional chocolate bars for increasing functions. However, we can similarly consider a chocolate bar CB(f , y, z) for a function f that is not increasing by forming an increasing function f ′ such that the chocolate bars CB(f , y, z) and CB(f ′ , y, z) have the same mathematical structure in the context of the game.
For example, the chocolate bar in Figure 19.13 is constructed by a function that is
not increasing, whereas the chocolate bar in Figure 19.14 is formed by an increasing
function; however, these two chocolate bars have the same mathematical structure in
terms of the game.
Grundy numbers of impartial three-dimensional chocolate-bar games | 377
Definition 4.1. Suppose that F(u, v) ∈ Z≥0 for u, v ∈ Z≥0 . The function F is said to be
increasing if F(u, v) ≤ F(x, z) for x, z, u, v ∈ Z≥0 such that u ≤ x and v ≤ z.
Next, we define moveF ({x, y, z}) in Definition 4.4 as the set containing all the positions
that can be directly reached from position {x, y, z} in one step.
Definition 4.4. moveF ({x, y, z}) = {{u, min(F(u, z), y), z} : u < x} ∪ {{x, v, z} : v < y}
∪ {{x, min(y, F(x, w)), w} : w < z}, where u, v, w ∈ Z≥0 .
For example, when F(x, z) = max(⌊x/2⌋, ⌊z/2⌋), {5, 3, 7} ∈ moveF ({7, 3, 7}) because we
obtain the chocolate bar shown in Figure 19.17 by reducing the third coordinate of the
chocolate bar in Figure 19.16 from 7 to 5.
Remark 4.1. For a fixed function F, we use move({x, y, z}) instead of moveF ({x, y, z}) for convenience.
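The same recursion as in the two-dimensional case extends to three dimensions. The sketch below is my own transcription of Definition 4.4 for the example F(x, z) = max(⌊x/2⌋, ⌊z/2⌋) above; on positions with y ≤ F(x, z) the computed values equal x ⊕ y ⊕ z, in line with Theorem 3 below.

```python
from functools import lru_cache

# My own transcription of Definition 4.4 for the example
# F(x, z) = max(floor(x/2), floor(z/2)) used above.

def F(x, z):
    return max(x // 2, z // 2)

def mex(s):
    m = 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy3(x, y, z):
    """Grundy number of position {x, y, z} of CB(F, x, y, z)."""
    options = {grundy3(u, min(F(u, z), y), z) for u in range(x)}
    options |= {grundy3(x, v, z) for v in range(y)}
    options |= {grundy3(x, min(y, F(x, w)), w) for w in range(z)}
    return mex(options)

# On positions with y <= F(x, z), the values equal x XOR y XOR z:
assert all(grundy3(x, y, z) == x ^ y ^ z
           for x in range(9) for z in range(9)
           for y in range(F(x, z) + 1))
```

Note that every option generated by moveF again satisfies the constraint y ≤ F(x, z), so the recursion never leaves the set of legal positions.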
k ⊕ h ⊕ i = mex({(k − t) ⊕ h ⊕ i : t = 1, 2, . . . , k}
∪ {k ⊕ (h − t) ⊕ i : t = 1, 2, . . . , h} ∪ {k ⊕ h ⊕ (i − t) : t = 1, 2, . . . , i}). (19.14)
Proof. The proof is omitted because this is a well-known fact regarding Nim-sum ⊕.
See [4, Prop. 1.4, p. 81].
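The identity (19.14) is also easy to verify exhaustively for small values (a brute-force check of my own):

```python
# Exhaustive check of the mex identity (19.14) for small values:
# reducing exactly one coordinate of (k, h, i) realizes every value
# below k ^ h ^ i but never k ^ h ^ i itself.

def mex(s):
    m = 0
    while m in s:
        m += 1
    return m

for k in range(8):
    for h in range(8):
        for i in range(8):
            opts = {(k - t) ^ h ^ i for t in range(1, k + 1)}
            opts |= {k ^ (h - t) ^ i for t in range(1, h + 1)}
            opts |= {k ^ h ^ (i - t) for t in range(1, i + 1)}
            assert mex(opts) == k ^ h ^ i
```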
Theorem 3. Let F(x, z) be an increasing function. Let gn (z) = F(n, z) and hm (x) = F(x, m)
for n, m ∈ Z≥0 . If gn and hm satisfy the NS property in Definition 3.6 for any fixed n, m ∈
Z≥0 , then the Grundy number of chocolate bar CB(F, x, y, z) is x ⊕ y ⊕ z. (19.15)
Proof. Let x, y, z ∈ Z≥0 be such that y ≤ F(x, z). We prove (19.15) by mathematical
induction and suppose that 𝒢 ({u, v, w}) = u ⊕ v ⊕ w for u, v, w ∈ Z≥0 , u ≤ x, v ≤ y, w ≤ z,
v ≤ F(u, w), with u + v + w < x + y + z.
Let
A = {x ⊕ y ⊕ (z − k) : k = 1, 2, . . . , z} (19.16)
and
A = A′ . (19.18)
Let
B = {(x − k) ⊕ y ⊕ z : k = 1, 2, . . . , x} (19.19)
and
B = B′ . (19.21)
Let
C = {x ⊕ (y − k) ⊕ z : k = 1, 2, . . . , y}. (19.22)
mex(A ∪ B ∪ C) = x ⊕ y ⊕ z. (19.25)
⌊z/2^i ⌋ = ⌊z′/2^i ⌋
d × 2^i ≤ z < z′ < (d + 1) × 2^i .
(b) Let
⌊z/2^i ⌋ < ⌊z′/2^i ⌋. (19.26)
Proof. Let z = ∑_{i=0}^{n} z_i 2^i and z′ = ∑_{i=0}^{n} z′_i 2^i . (a) follows directly from the definition of the
floor function. (b) falls into two cases according to inequality (19.26).
Case (i) Suppose that z < 2n ≤ z ′ . Let c = 0 and t = z. Then we have inequality (19.27).
Case (ii) Suppose there exists s ∈ Z≥0 such that s ≥ i and zk = zk′ for k = n, n−1, . . . , s+
1 and zs = 0 < 1 = zs′ . Then there exist c, t ∈ Z≥0 satisfying inequality (19.27).
Theorem 4. Let F(x, z) be an increasing function, and let gn (z) = F(n, z) and hm (x) =
F(x, m) for n, m ∈ Z≥0 . Suppose that the Grundy number of chocolate bar CB(F, x, y, z) is x ⊕ y ⊕ z.
Then gn and hm satisfy the NS property in Definition 3.6 for any fixed n, m ∈ Z≥0 .
Proof. Let n ∈ Z≥0 . To prove that gn has the NS property, it suffices to show that
⌊gn (a)/2^{j−1} ⌋ = ⌊gn (a + 1)/2^{j−1} ⌋
for any a ∈ Z≥0 such that
⌊a/2^j ⌋ = ⌊(a + 1)/2^j ⌋. (19.29)
Suppose, to the contrary, that
⌊gn (a)/2^{j−1} ⌋ < ⌊gn (a + 1)/2^{j−1} ⌋ (19.30)
for a ∈ Z≥0 satisfying equation (19.29). Here we assume that a ∈ Z≥0 is the smallest in-
teger satisfying equation (19.29) and inequality (19.30). According to inequality (19.30)
and Lemma 6(2), there exist i, c ∈ Z≥0 and t ∈ R such that i ≥ j − 1, 0 ≤ t < 2i , and
⌊a/2^{i+1} ⌋ = ⌊(a + 1)/2^{i+1} ⌋.
Let g ′ (z) = min(gn (z), c × 2^{i+1} + t). We consider the two-dimensional chocolate bar
CB(g ′ , y, z) for z ≤ a + 1 as defined in Section 3. Let 𝒢g ′ ({y, z}) be the Grundy number
of this chocolate bar CB(g ′ , y, z). Because gn (z) is increasing, according to (19.31) for
z ≤ a,
Therefore
Because a ∈ Z≥0 is the smallest integer that satisfies equation (19.29) and inequal-
ity (19.30), g ′ (z) satisfies the NS-property for z ≤ a. According to inequality (19.31) and
the definition of g ′ ,
for y, z ∈ Z≥0 such that z ≤ a + 1 and y ≤ g ′ (z). By inequality (19.34) and equa-
tions (19.35) and (19.36)
(c × 2^{i+1} + 2^i ) ⊕ (a + 1) ∈ {𝒢g ′ ({p, q}) : {p, q} ∈ moveg ′ ({c × 2^{i+1} + t, a + 1})}. (19.38)
Based on Definition 3.4, moveg ′ ({c × 2^{i+1} + t, a + 1}) is a union of two sets. The first set is created by reducing the first coordinate of the point {c × 2^{i+1} + t, a + 1}, and the second is created by reducing the second coordinate of the point {c × 2^{i+1} + t, a + 1}. Therefore, based on g ′ (w) ≤ c × 2^{i+1} + t, we obtain
𝒢 ({n, c × 2^{i+1} + 2^i , a + 1}) = n ⊕ (c × 2^{i+1} + 2^i ) ⊕ (a + 1). (19.42)
{n ⊕ g ′ (w) ⊕ w : 0 ≤ w ≤ a}
= {n ⊕ gn (w) ⊕ w : 0 ≤ w ≤ a}
= {n ⊕ F(n, w) ⊕ w : 0 ≤ w ≤ a}
= {𝒢 ({n, F(n, w), w}) : 0 ≤ w ≤ a}. (19.44)
Therefore, based on equations (19.42) and (19.44) and relation (19.43), there exists w′
such that 0 ≤ w′ ≤ a and
𝒢 ({n, c × 2^{i+1} + 2^i , a + 1}) = 𝒢 ({n, F(n, w′ ), w′ }). (19.45)
hence
{n, F(n, w′ ), w′ } = {n, min(c × 2^{i+1} + 2^i , F(n, w′ )), w′ } ∈ move({n, c × 2^{i+1} + 2^i , a + 1}). (19.46)
Equation (19.45) and relation (19.46) contradict the definition of the Grundy number.
Case (ii) If we have inequality (19.33), then, as 0 < e < 2^i and 0 ≤ t < 2^i ,
c × 2^{i+1} + 2^i ≤ gn (a + 1)
= F(n, a + 1)
≤ F(n, a + 1 + t ⊕ (e − 1))
≤ F(n, d × 2^{i+1} + 2^i + t ⊕ (e − 1));
hence {n, c × 2^{i+1} + 2^i , d × 2^{i+1} + 2^i + t ⊕ (e − 1)} is a position of the chocolate bar CB(F, x, y, z).
According to (19.31),
According to (19.33),
Thus far, we have only considered three-dimensional chocolate bars for increas-
ing functions. However, we can similarly consider a chocolate bar CB(F, x, y, z) for
a function F that is not increasing by constructing an increasing function F ′ such that
position {x, y, z} of chocolate bar CB(F, x, y, z) and position {x, y, z} of CB(F ′ , x, y, z) have
the same bar length, height, and width and have the same mathematical structure as
a game.
For example, the chocolate bar in Figure 19.18 is constructed by a function that is
not increasing, whereas the chocolate bar in Figure 19.19 is formed by an increasing
function; however, these two chocolate bars have the same mathematical structure as
a game.
5 Unsolved problems
Certain chocolate bars remain that have not been considered.
Here we present a chocolate bar with two steps; see Figure 19.20. This chocolate
bar is represented by three coordinates x, y, z; the reduction of z may simultaneously
affect the first and second coordinates x and y. The relationships between these three
coordinates are expressed by the following two inequalities:
x ≤ ⌊(z + 3)/2⌋ and y ≤ ⌊(z + 3)/2⌋.
The result in [2] can be applied to this type of chocolate bar; however, the proof may be more complicated than that in [2].
As another type of chocolate bar, we consider a three-dimensional bar with an up-
per and lower structure. An example of this type of chocolate bar is shown in Fig-
ure 19.21.
The results of the present work can be used to study this type of chocolate bar. More-
over, research on the previously mentioned chocolate bar with two steps should be
conducted in the future.
The chocolate bars shown in Figures 19.20 and 19.21 appear to be simple gener-
alizations of the chocolate bars studied here and in [2]; however, they are technically
complex, and their investigation may prove challenging.
Bibliography
[1] A. C. Robin, A poisoned chocolate problem, Math. Gaz. 73(466) (1989), 341–343.
[2] S. Nakamura, R. Miyadera and Y. Nakaya, Impartial chocolate bar games, Grundy numbers of
impartial chocolate bar games, Integers 20 (2020), #G1.
[3] M. H. Albert, R. J. Nowakowski and D. Wolfe, Lessons In Play: An Introduction to Combinatorial
Game Theory, Second Edition, A K Peters/CRC Press, Natick, MA., United States, 2019.
[4] A. N. Siegel, Combinatorial Game Theory, volume 146 of Graduate Studies in Mathematics,
American Mathematical Society, Providence, RI, 2013.
Aaron N. Siegel
On the structure of misère impartial games
Dedicated to the memory of John Conway and Dan Hoey, with whom much of the material in this
paper was joint work
Various authors have discussed the “disjunctive compound” of games with the last player win-
ning (Grundy [5]; Guy and Smith [7]; Smith [11]). We attempt here to analyse the disjunctive com-
pound with the last player losing, though unfortunately with less complete success . . . .
They understood the proviso and its role in misère simplification (see Section 2.3),
and they held out hope that new techniques might be discovered that would lead to
additional simplification rules. Those hopes were dashed by Conway, who proved in
the early 1970s that if a game G cannot be simplified via the Grundy–Smith rule, then
in fact G is in simplest form. This makes application of the canonical misère theory
essentially intractable in the general case, and subsequent analyses of misère games
have focused on alternative reductions, such as the quotient construction due to Plam-
beck [8] and Plambeck and Siegel [9].
Nonetheless, the canonical misère theory—despite its limited practical utility—
gives rise to a fascinating structure theory. Define misère game values in the usual
https://doi.org/10.1515/9783110755411-020
390 | A. N. Siegel
where o− (G) denotes the misère outcome of G. The set of misère game values forms a
commutative monoid ℳ, and it is an alluring problem to study the structure of this
monoid for its own sake. Conway proved in the 1970s that ℳ is cancellative; hence it
embeds in its group of differences 𝒟, and we have a rather curious Abelian group that
arises cleanly “in nature.”
Several results on the structure of ℳ (and 𝒟) are stated in ONaG without proof.
The aim of this paper is twofold: first, to gather together what is known about the
structure of ℳ, including proofs of previously known results, into a single narrative;
and second, to extend somewhat the frontier of knowledge in this area.
In the former category, we include in particular a proof of the cancellation the-
orem, a derivation of the exact count of |ℳ6 | (the number of distinct games born by
day 6), and a proof that every game G can be partitioned, nonuniquely, into just finitely
many prime parts (a definition is given in Section 7). All three results are due to Con-
way [3], although the count of |ℳ6 | was stated inaccurately in ONaG and later cor-
rected by Thompson [12].
We also offer a smattering of new results:
– In Section 3, we show that Conway’s mate construction is not invariant under
equality. In retrospect, this should perhaps not be shocking, but it came as a sur-
prise when it was discovered.
– In Section 4 (with additional details given in Appendix A), we show how to com-
pute |ℳ7 |, the exact number of games born by day 7. (The output of this calcula-
tion, though it fits comfortably in computer memory, is too large to include in its
entirety in a journal paper.)
– In Section 6, we show that 𝒟 is “almost” torsion-free (in precise terms, it is torsion-
free modulo association, as defined in Section 6). The proof is not especially diffi-
cult, but it is not obvious; Conway previously gave thought to this question, writ-
ing in ONaG: “Further properties of the additive semigroup of games seem quite
hard to establish—if G+G = H +H is G necessarily equal to H or H +1?” (Theorem 29
implies an affirmative answer).
– In Sections 6 and 7, we extend and elaborate on Conway’s theory of prime parti-
tions. As one application of this work, we show that all games born by day 6 have
the unique partition property.
The first and last results were joint work with John Conway and Dan Hoey, conducted
in Princeton during the 2006–2007 academic year. In addition, a computational en-
gine for misère games, written by Hoey as an extension to cgsuite, has proved invalu-
able in assisting the work in this paper.
On the structure of misère impartial games | 391
2 Prerequisites
We briefly review the notation and foundational material for misère games. Results in
this section are stated without proof and are originally due to Grundy and Smith [7]
and Conway [3]. A full exposition, including proofs for all results stated here, can be
found in [10, Sects. V.1–V.3].
Formally, an impartial game is identified with the set of its options, so that G′ ∈ G
means “G′ is an option of G”. We write G ≅ H to mean that G and H are identical as
sets; thus it is possible that G = H but G ≇ H.
It is customary to define
0 = {},
∗ = {0},
∗2 = {0, ∗},
∗m = {0, ∗, ∗2, . . . , ∗(m − 1)}.
Since only impartial games are under consideration in this paper, we will follow Con-
way’s convention and drop the asterisks, writing
0, 1, 2, . . . , m, . . .
in place of
0, ∗, ∗2, . . . , ∗m, . . . .
Thus 2 + 2 is not the integer 4; it is the game obtained by playing two copies of ∗2
side-by-side.
The following conventions will also be used freely. Options of G may be writ-
ten by concatenation, rather than using set notation; for example, 632 is the game
{∗6, ∗3, ∗2}, not the integer six hundred thirty-two. Subscripts denote sums of games:
42 is ∗4 + ∗2, and 6422 is ∗6 + ∗4 + ∗2 + ∗2. Finally, we write G# (pronounced “G sharp”)
for the singleton {G}. Sometimes, we will employ a chain of sharps and subscripts,
which should be read left-to-right; for example,
We write ℳ for the commutative monoid of all (finite) misère impartial game val-
ues. This monoid can be stratified according to the usual hierarchy:
b̃(0) = 0; b̃(G) = max{b̃(G′ ) + 1 : G′ ∈ G}.
It is clear from this definition that b(G) depends only on the value of G, so that we
may write, for n ≥ 0,
ℳn = {G ∈ ℳ : b(G) ≤ n},
ℳ = ⋃n ℳn .
For a set X, we will write |X| for the cardinality of X, so that |ℳn | is the number of
distinct games born by day n.
Definition 2. Let G ≅ {G1′ , . . . , Gk′ }. Let H be a game whose options include those of G:
G ≅ {a1 , a2 , . . . , ak }.
(Here, as usual, mex(X) denotes the minimal excluded value of X: the least m ≥ 0 with m ∉ X.)
G− = 1 if G ≅ 0, and G− = {(G′ )− : G′ ∈ G} otherwise.
Theorem 8 (Simplest form theorem). Suppose that neither G nor H has any reversible
options, and assume that G = H. Then G ≅ H.
If G has no reversible options, then we say that G is in canonical form (or simplest
form). It is a remarkable fact that reversible moves can only arise in the context of
Grundy–Smith simplification.
Theorem 9. Suppose that every option of H is in canonical form and that some option
of H is reversible through G. Then H simplifies to G.
Definition 10. We say that G is linked to H if
o− (G + T) = o− (H + T) = P
for some T.
Lemma 11. Let G and H be games. Then G is linked to H if and only if G is equal to no H ′ and no G′ is equal to H.
Theorem 12. Let G and H be games. G = H if and only if the following four conditions
hold:
(i) G is linked to no H ′ ;
(ii) No G′ is linked to H;
(iii) If G ≅ 0, then H is an N -position;
(iv) If H ≅ 0, then G is an N -position.
3 Concubines
Conway introduced the mate G− as a stepping stone to the simplest form theorem.
The terminology perhaps suggests invariance of form, and we might be tempted to
suppose that G = H implies G− = H − , that G−− = G, and so forth; but in this section,
we show that essentially all such assertions are false. These are not especially deep
observations, but they do not appear to have been pointed out before.
As an example, let G = (2## 1)# . It is easily seen to be in simplest form. However,
G− = (2## 0)# , and since this is an N -position, the proviso is satisfied, and the unique
option 2## 0 reverses through 0. Therefore G− = 0. Likewise, if H ≅ (2## 0)# , then we
have H = 0, but H − = (2## 1)# ≠ 1.
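These outcome computations are convenient to reproduce by representing game forms as nested frozensets (an encoding of my own; the helper names star and sharp are not from the paper):

```python
# Game forms as nested frozensets (my own encoding). Concatenation such
# as "2## 0" denotes the game whose options are 2## and 0.

def star(m):
    """The nimber *m = {0, *, ..., *(m-1)}; star(0) is the empty game 0."""
    return frozenset(star(j) for j in range(m))

def sharp(g):
    """G# = {G}, the singleton whose only option is G."""
    return frozenset([g])

def misere_outcome(g):
    """'N' if the player to move wins in misere play, else 'P'."""
    if not g:   # no moves: the previous player moved last and so loses
        return 'N'
    return 'N' if any(misere_outcome(opt) == 'P' for opt in g) else 'P'

two_ss = sharp(sharp(star(2)))            # the game 2##
g = sharp(frozenset([two_ss, star(0)]))   # the game (2## 0)#
assert misere_outcome(g) == 'N'           # as used in the example above
```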
Definition 13. Suppose that G and H are in simplest form. We say that H is a concubine
of G if H − = G, but G− ≠ H.
c(G) = { (2##1)#             if G ≅ 0,
         {c(G′) : G′ ∈ G}    otherwise.
Since the mate of (2##1)# is equal to 0, it is immediately clear that c(G)− = G. It remains
to show that c(G) is in simplest form. Suppose (for contradiction) some c(G) is not,
and choose G to be a minimal counterexample. Then c(G) simplifies to some game H.
If c(G′ )′ = c(G′′ ) in all cases, then G is obtained from G′′ by adding reversible op-
tions, contradicting the assumption that G is in canonical form. (The proviso is clearly
satisfied, since it is easily seen that o(G) = o(c(G)).) Otherwise, we must have some
G′ = 0 and H = c(G′ )′ = 2## 1. This means that c(G) must have 1 as an option. Since ev-
ery option of c(G) has the form c(G′ ), and since each c(G′ ) is canonical (by minimality
of G), this gives the desired contradiction.
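The construction of c(G) is likewise directly computable. A Python sketch in the same spirit (games as frozensets of their options; the encoding and the names below are illustrative assumptions):

```python
# Games as frozensets of their options.
ZERO = frozenset()
ONE = frozenset({ZERO})            # 1    = {0}
TWO = frozenset({ZERO, ONE})       # 2    = {0, 1}
TWO_S = frozenset({TWO})           # 2#   = {2}
TWO_SS = frozenset({TWO_S})        # 2##  = {2#}
T = frozenset({TWO_SS, ONE})       # 2##1 = {2##, 1}
C_ZERO = frozenset({T})            # (2##1)#

def concubine(g):
    # c(G) = (2##1)# if G is identically 0, else {c(G') : G' in G}.
    return C_ZERO if g == ZERO else frozenset(concubine(gp) for gp in g)
```

By construction, every option of c(G) has the form c(G′) for some option G′ of G, which is the fact used in the proof above.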
and indeed there are arbitrarily long chains G, G− , G−− , G−−− , . . . of distinct games
(where it is understood that at each iteration, we pass to the canonical form).
|ℳ0 | = 1
|ℳ1 | = 2
|ℳ2 | = 3
|ℳ3 | = 5
|ℳ4 | = 22
|ℳ5 | = 4171780
0, 1, 2, 3, 4, 2# , 3# , and 32.
G           |𝒮4^G|    Adjustment
2#          14        −2^14 + 1
3           10        −2^10 + 1
2           12        −2^12 + 1
1           9         −2^9 + 1
0           10        −2^10 + 1
(Proviso)             +2^9 − 1

2^22 − 2^14 − 2^10 − 2^12 − 2^9 − 2^10 + 2^9 + 4 = 4171780.
Figure 20.2: The number of games born by day 5.
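The bottom line of Figure 20.2 is elementary arithmetic, easy to verify mechanically; for instance, in Python:

```python
# Figure 20.2: start from the 2^22 subsets of M4 and apply each adjustment.
adjustments = [
    -2**14 + 1,  # G = 2#
    -2**10 + 1,  # G = 3
    -2**12 + 1,  # G = 2
    -2**9 + 1,   # G = 1
    -2**10 + 1,  # G = 0
    +2**9 - 1,   # proviso correction
]
total = 2**22 + sum(adjustments)
print(total)  # 4171780
```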
(All other combinations of nimbers contain either 0 or 1, and therefore reduce to a nimber by the mex rule.)
Now suppose 2# ∈ G and G is reducible. G must simplify to 2, since 2 is the only option of 2#; therefore 0 ∈ G and 1 ∈ G, and all other options of G must contain 2. The only possibilities are 2#10 and 2#310. So of the 2^4 subsets containing 2#, two are reducible, and the other fourteen are not. Therefore
|ℳ4 | = 8 + 14 = 22.
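In other words, the eight games listed above (0, 1, 2, 3, 4, 2#, 3#, and 32) together with the irreducible subsets containing 2# account for the count:

```python
# Of the 2^4 subsets of M3 = {0, 1, 2, 3, 2#} that contain 2#, exactly
# two (2#10 and 2#310) are reducible; the remaining 14 are canonical.
with_2sharp = 2**4
reducible = 2
print(8 + (with_2sharp - reducible))  # 22
```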
H ≅ G ∪ {H1′ , . . . , Hk′ },
𝒮4^G = {H ∈ ℳ4 : G ∈ H}.
When G ≇ 0, the added reversible moves {H1′, . . . , Hk′} can be any nonempty subset of 𝒮4^G, so exactly 2^|𝒮4^G| − 1 games simplify to G.
When G ≅ 0, the proviso requires additionally that o(H) = N, so that at least one of H1′, . . . , Hk′ must be a P-position. So if H1′, . . . , Hk′ are all N-positions with 0 ∈ Hi′,
G          |𝒮5^G|                                        Adjustment
2#3210     2^21 − 2^13 − 2^11 − 2^9 − 2^8 − 2^8          −2^2085888 + 1
2#321      2^21 − 2^13 − 2^11 − 2^9 − 2^8                −2^2086144 + 1
2#320      2^21 − 2^13 − 2^11 − 2^9 − 2^8                −2^2086144 + 1
2#32       2^21 − 2^13 − 2^11 − 2^9                      −2^2086400 + 1
2#31       2^21 − 2^13 − 2^9 − 2^8                       −2^2088192 + 1
2#30       2^21 − 2^13 − 2^9 − 2^8                       −2^2088192 + 1
2#3        2^21 − 2^13 − 2^9                             −2^2088448 + 1
2#210      2^21 − 2^13 − 2^11 − 2^8 − 2^8                −2^2086400 + 1
2#21       2^21 − 2^13 − 2^11 − 2^8                      −2^2086656 + 1
2#20       2^21 − 2^13 − 2^11 − 2^8                      −2^2086656 + 1
2#2        2^21 − 2^13 − 2^11                            −2^2086912 + 1
2#1        2^21 − 2^13 − 2^8                             −2^2088704 + 1
2#0        2^21 − 2^13 − 2^8                             −2^2088704 + 1
2##        2^21 − 2^13                                   −2^2088960 + 1
32         2^21 − 2^11 − 2^9                             −2^2094592 + 1
3#         2^21 − 2^9                                    −2^2096640 + 1
4          2^21 − 2^11 − 2^9 − 2^8 − 2^8                 −2^2094080 + 1
3          2^21 − 2^11 − 2^8 − 2^8                       −2^2094592 + 1
2#         2^21 − 2^11                                   −2^2095104 + 1
2          2^21 − 2^8 − 2^8 − (2^14 − 1) − (2^10 − 1)    −2^2079234 + 1
1          2^21 − 2^9 − (2^12 − 1) − (2^10 − 1)          −2^2091522 + 1
0          2^21 − (2^12 − 1) − (2^10 − 1) − (2^9 − 1)    −2^2091523 + 1
(Proviso)  2^21 − (2^12 − 1) − (2^10 − 1) − 2^17         +2^1960962 − 1

Figure 20.3: The calculation of |𝒮5^G| for each G ∈ ℳ4.
then in fact H is canonical. We must therefore add back such subsets into the count. This gives rise to the additional “proviso” term in Figure 20.2; the exponent is the count of N-positions in ℳ4 that contain 0 as an option.
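Each |𝒮5^G| entry in Figure 20.3 is itself a short power-of-two computation, and the exponent arithmetic can be spot-checked directly; for example, in Python:

```python
# Spot-checks of |S5^G| entries from Figure 20.3.
assert 2**21 - 2**13 - 2**11 - 2**9 - 2**8 - 2**8 == 2085888       # G = 2#3210
assert 2**21 - 2**13 - 2**11 - 2**9 - 2**8 == 2086144              # G = 2#321
assert 2**21 - 2**8 - 2**8 - (2**14 - 1) - (2**10 - 1) == 2079234  # G = 2
assert 2**21 - (2**12 - 1) - (2**10 - 1) - (2**9 - 1) == 2091523   # G = 0
assert 2**21 - (2**12 - 1) - (2**10 - 1) - 2**17 == 1960962        # proviso
print("all exponent checks pass")
```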
−2^(2^4171779 − 2^2085887)
−2^(2^4171779 − 2^2086143 + 1)
−2^(2^4171779 − 2^2086143 − 2^2085887 + 1)
⋮
+ 4171779

Figure 20.4: A partial expansion of the expression for |ℳ7|.
The calculation of the critical exponents |𝒮5^G| is by recursive application of the same principle. For a given G ∈ ℳ4, there are exactly 2^21 subsets of ℳ4 containing G, so we subtract from 2^21 the number of reducible such subsets.
Figure 20.3 breaks down the calculation of |𝒮5^G| for each G. For H ⊂ ℳ4 with G ∈ H,
there are two distinct ways that H might be reducible:
(i) H simplifies to some G′ ∈ G, so that G itself is reversible (together with various
other options of H); or
(ii) H simplifies to some other K ∈ ℳ3 with G ∈ K (so that G is not reversible, but
remains as an option of the simplified game K).
Case (i) yields one term for each G′ ∈ G, and since G′ ∈ ℳ3 , there are at most five such
terms (the maximum is achieved for G = 2# 3210). Case (ii) requires that G ∈ ℳ2 ; hence
it is only relevant for G = 0, 1, 2, explaining the special structure of those three rows in
Figure 20.3.
The precise details of how the terms in Figure 20.3 are calculated are fairly subtle.
Since the calculation of |ℳ6 | is only slightly easier than the general case, those details
are deferred until Appendix A.
in which all the exponents a1, . . . , ak, b are close to 2^4171779, with b (the “proviso term”) somewhat smaller than the others. In the initial calculation, there are precisely 4171780 terms −2^ai, and by combining like terms we can reduce this number to
758660. The resulting expression is obviously too large to publish in a journal paper,
but it is small enough to fit comfortably in computer memory and is therefore easily
computable. A partial expansion is given in Figure 20.4.
The same method works in theory to compute |ℳ8 | (and higher), but the expres-
sion for |ℳ8 | would have a number of terms on the order of |ℳ6 |. In some sense, the
chained powers of two in the expression for |ℳn | encode the entire structure of ℳn−2 ,
and so we have reached the practical limit of this calculation. We now turn our atten-
tion to the abstract structure of ℳ itself.
The cancellation theorem was discovered by Conway in the 1970s, and it is stated
without proof in ONAG. A proof has been published once before, in Dean Allemang’s
1984 thesis [1]. Since the proof is fairly tricky and is essential to the succeeding anal-
ysis, we give a full exposition here.
Definition 16. We say that H is a part of G if G = H + X for some X. In this case, we say
that H + X is a partition of G and X is the counterpart of H in G.
X = X + (X ′′ + Y) = X ′′ + (X + Y) = X ′′ ,
Definition 18 (Conway [4]). We say that T is cancellable if, for all G and H,
(i) G + T = H + T implies G = H, and
(ii) G ⋈ H implies G + T ⋈ H + T.
Proof. The proof is by induction on T. For T ≅ 0, (a) is trivial, and (b) follows from
Lemma 17, so assume that T ≇ 0. Since the statements are independent of the form
of T, we can furthermore assume that T is given in simplest form. We will first prove (b)
and then (a).
(b) Assume (for contradiction) that T has infinitely many distinct parts X1 , X2 , . . . ,
and write
T = X1 + Y1 = X2 + Y2 = ⋅ ⋅ ⋅ .
T = H + T = H + H + T = H + H + H + T = ⋅⋅⋅.
Since T has finitely many parts, we must have m ⋅ H = n ⋅ H for some m < n. Since T ′ is
cancellable and H is a part of T ′ , by Lemma 19 H is cancellable. Therefore (n−m)⋅H = 0.
By Lemma 17 we have H = 0 or 1, but Proposition 7 gives H ≠ 1.
A simple corollary of the cancellation theorem will prove to be useful.
Corollary 21. Suppose G + X = Y with G in simplest form. For every option G′ ∈ G, either
G′ + X ′ = Y, or G′ + X = Y ′ .
Lemma 22 (Difference lemma). If G and H are in simplest form and G − H exists, then
(a) either G − H = G′ − H ′ for some G′ and H ′ , or
(b) every G′ − H and G − H ′ exists, and G − H = {G′ − H, G − H ′ }.
Definition 23. Let X be a part of G. We say that X is novel if every G − X ′ and every
G′ − X exists. Otherwise, we say that X is derived. We say that a partition G = X + Y is
novel if either X or Y is novel and derived if both parts are derived.
6.1 Parity
Definition 25. We say that U is a unit if it has an inverse. If G = H + U for some unit U,
then we say that G and H are associates and write G ≈ H.
By Lemma 17 the only units are 0 and 1. Thus by Proposition 7 every game G has
exactly two associates, G and G + 1, and this induces a natural pairing among games.
We now introduce a convenient way to distinguish between the elements of each pair.
Definition 26. If (the simplest form of) G is an option of (the simplest form of) G + 1,
then we say that G is even and G + 1 is odd.
Proposition 27 (Conway [3]). Every game G is either even or odd, but not both.
Proof. Assume that G is in simplest form. If G is not even, then G must be a reversible
option of G + 1, so that G + 1 = G′ . Therefore G is odd.
Moreover, if G is even, then it is a canonical option of G+1 and hence not reversible.
Therefore G + 1 ≠ G′ for every G′ . So G cannot be both even and odd.
Proof. Suppose G and H are both even, and assume (for contradiction) that G + H is
reversible in G + H + 1. Without loss of generality, G′ + H = G + H + 1. By cancellation,
G′ = G + 1, contradicting the assumption that G is even.
𝒟 = {G − H : G, H ∈ ℳ}
X ′ = (n − 1) ⋅ G + G′ = (n − 1) ⋅ H + H ′ (†)
n ⋅ (n − 1) ⋅ G + n ⋅ G′ = n ⋅ (n − 1) ⋅ H + n ⋅ H ′ .
n ⋅ G′ = n ⋅ H ′ .
Now G′ and H ′ have the same parity by (†). So by induction on G and H we may assume
that G′ = H ′ . But now cancellation on (†) gives
(n − 1) ⋅ G = (n − 1) ⋅ H,
7 Primes
Definition 31. A part H of G is said to be proper if H ≉ 0 or G.
Definition 32. We say that G is prime if G is not a unit and has no proper parts.
Proof. By Lemma 20, every game has just finitely many parts. We can therefore prove
the theorem by induction on the number of proper parts of G.
If G itself is prime, then there is nothing to prove. Otherwise, we can write G =
X + Y, where X and Y are proper parts of G. Now every proper part of X is a proper
part of G, but X is not a proper part of X. Therefore X has strictly fewer proper parts
than G. By induction X has a prime partition. By the same argument so does Y, and
we are done.
It is important to note that a partition of G into primes need not be unique. For
example, we can show that
(4 + 2)# = 2 + P = 4 + Q,
where P and Q are distinct primes. (This example is originally due to Conway and Nor-
ton.) The behavior of primes can often be quite subtle. It is possible for G to have sev-
eral prime partitions of different lengths:
(4 + 2# )# = 2 + P1 + P2 = 4 + Q,
where P1 , P2 , and Q are all distinct primes. Furthermore, G + G might have a prime part
that is not a part of G. For example, if G = (4 + 2# )## , then there exists a partition of
G + G into exactly three primes.
Although these examples advise caution, we can nonetheless discern some useful
structure among primes. In the following propositions, we assume G to be given in
simplest form.
Proposition 35. If G has a prime option, then G has at most two even prime parts.
Proof. Fix a prime option G′. First, suppose that G = X + Y + Z for some games X, Y, and Z. By Corollary 21 we have G′ = X′ + Y + Z without loss of generality. Since G′ is
prime, one of Y or Z must be a unit. This shows that every partition of G involves at
most two primes.
G′ = P1′ + P2 = Q′1 + Q2 .
Since G′ is prime, P1′ and Q′1 must be units, so P2 and Q2 are equal up to a unit, that is, P2 ≈ Q2. By cancellation, P1 ≈ Q1 as well.
Corollary 36. If G has at least three prime options, distinct modulo association, then G
is prime.
Proof. Suppose G has a prime option but is not itself prime. By Proposition 35, G has a
unique prime partition G = P + Q. Therefore every G′ = P ′ + Q or P + Q′ . In particular, if
G′ is prime, then G′ ≈ P or Q. So G has at most two prime options up to association.
Proof. Suppose every option of G is prime, but G is not. By Proposition 35, if G ≠ 0, then we
can write G = P + Q for suitable primes P and Q, and furthermore P and Q are unique
(up to association). Assume that each of G, P, and Q is given in simplest form.
Now G cannot be odd, since then G + 1 would be a prime option of G. So G is even,
and we may therefore assume that P and Q are both even. Now for every option P ′ , we
have either G′ = P ′ + Q or G = P ′ + Q′ . If G′ = P ′ + Q, then since G′ is prime, we must
have G′ ≈ Q, so P ′ is a unit. Suppose instead that G = P ′ + Q′ . We cannot have P ′ ≈ P,
since P is even. Furthermore, we cannot have P ′ ≈ Q: since the partition of G into P + Q
is unique, this would imply Q′ ≈ P, so that P ′′ ≈ P, contradicting the assumption that
P is in simplest form. Therefore either P ′ is a unit and Q′ ≈ G, or else Q′ is a unit and
P ′ ≈ G.
We have therefore shown that every option of P is either 0, 1, G, or G + 1. By sym-
metry the same is true for Q. However, for every option G′ , we have G′ = P ′ + Q or
G′ = P + Q′ . Since G′ is prime, this implies G′ ≈ P or Q. Therefore G cannot be an
option of both P and Q: this would imply that P (or Q) is associated with one of its fol-
lowers. Therefore one of P or Q has only 0 and 1 as options. Without loss of generality,
assume that it is P. Since P is prime, we must have P = 2.
If also Q = 2, then G = 2+2 = 32, and we are done. Otherwise, Q has G as a follower.
But this means that no follower of G can be associated with Q. Thus every option of G
is associated with P = 2, and this completes the proof.
The above propositions suggest that composite games are relatively rare. This can
be made precise by considering the number of composite games born by day n. It suf-
fices to consider only even composites, since the number of odd composites born by
day n is precisely equal to the number of even composites born by day n − 1.
Figure 20.6: The nine highly composite even games born by day 5, listed with number of prime parts.
There are six even composites born by day 4. These and their unique partitions are
summarized in Figure 20.5. A computer search revealed exactly 490 even composites
born by day 5. Of these, 481 have exactly two even prime parts. Figure 20.6 lists the
nine examples with more than two parts.
We have already noted that (4 + 2)# does not have the UPP. In this section, we will
prove that every game born by day 6 has the UPP. Since (4 + 2)# is born on day 7, it is
therefore a minimal example.
Definition 39. We say G is a biprime if G has exactly two even prime parts.
Proposition 40. Suppose that G has a biprime option, say G′ = R+S with R and S prime.
Then either
(a) G is itself a prime or a biprime; or
(b) There is a prime P such that G = P + R + S, and this is the unique prime partition
of G; or
(c) There are primes P and Q such that G = P + R = Q + S, and these are the only two
prime partitions of G.
G = P1 + ⋅ ⋅ ⋅ + Pk
G′ = P1′ + P2 + ⋅ ⋅ ⋅ + Pk .
Since G′ is a biprime, it must be the case that k = 3 and P1′ is a unit, and without loss
of generality, P2 ≈ R and P3 ≈ S. Let us prove that this is the unique prime partition
of G. Let
G = Q1 + ⋅ ⋅ ⋅ + Ql
G′ = Q′1 + Q2 + ⋅ ⋅ ⋅ + Ql .
Case 2: Next, suppose that every prime partition of G has exactly two primes. Con-
sider any such partition
G = P1 + P2 .
G′ = P1′ + P2 .
Corollary 41. Suppose G is a game born by day 6 without the UPP. Assume that at least
one option of G is a biprime. Then, in fact,
G = 2 + P = Q1 + Q2
Proof. This is just Proposition 40, together with the (computationally verifiable) fact
that 2 is a part of every composite game born by day 5.
Lemma 42. Suppose G is a game born by day 6 without the UPP. Suppose G has a
biprime option 2 + R. If some other option G′ does not have R as a part, then G = R + S
for some part S of G′ .
Proof. Let G be a game born by day 6. If G has any prime options, then G is a biprime,
so it has the UPP. Likewise, if G is odd, then G + 1 is an even game born by day 5, with
the same parts as G. We know that every game born by day 5 has the UPP, so such G
must also have the UPP. Thus we need only consider even games whose options are
all composite.
Now let
We noted previously that |𝒞| = 10. Thus there are 2^10 subsets of 𝒞, and a computer search can rapidly verify that all of them have the UPP.
This leaves only those games with at least one biprime option. We can now apply
the following trick. Let
Let 𝒜 + 𝒜 be the set of all pairwise sums of elements of 𝒜. If G has at least one biprime
option, then by Lemma 42 either:
(i) G ∈ 𝒜 + 𝒜; or
(ii) All options of G share a common part R ≠ 2.
It therefore suffices to exhaust all possibilities for (i) and (ii). For (i), we have |𝒜| < 500, so |𝒜 + 𝒜| < 125000. It is therefore easy to compute the set 𝒜 + 𝒜, and a simple
computation shows that for most G ∈ 𝒜 + 𝒜, we have b(G) > 6. We can then directly
show that the remaining few have the UPP.
To complete the proof, we describe how to exhaust case (ii). For each R ∈ 𝒜, let
𝒞R = {G ∈ 𝒞 : R is a part of G}.
Now ΣR |𝒞R | is small, since the elements of 𝒞 collectively have a small number of parts.
But to address case (ii), we need only consider those games whose options are subsets
of {2 + R} ∪ 𝒞R for some R ≠ 2: these are exactly the games whose options share the
common factor R. We can therefore iterate over all R and all subsets of {2 + R} ∪ 𝒞R ,
checking that each possibility has the UPP.
All of the necessary computations to complete the proof have been implemented
and verified in cgsuite.
ℛn^G = {H ⊂ ℳn−1 : H simplifies to G},
𝒮n^K = {H ∈ ℳn : K ∈ H},
ℛn^{G,K} = {H ⊂ ℳn−1 : H simplifies to G and K ∈ H},
𝒩n = {H ∈ ℳn : H is an N-position},
𝒩n^0 = {H ∈ ℳn : H is an N-position and 0 ∈ H}.

Here ℛn^{G,K} is defined only when G ∈ K.
|ℳn| = 2^|ℳn−1| − ∑_{G∈ℳn−2} |ℛn^G|,

|ℛn^G| = { 2^|𝒮n−1^G| − 1              if G ≇ 0,
           2^|𝒮n−1^G| − 2^|𝒩n−1^0|    if G ≅ 0,

|𝒮n^K| = 2^(|ℳn−1|−1) − ∑_{G∈K} |ℛn^{G,K}| − ∑_{G∈𝒮n−2^K} |ℛn^G|,

|ℛn^{G,K}| = { 2^(|𝒮n−1^G|−1)                      if G ≇ 0 or K is a P-position,
               2^(|𝒮n−1^G|−1) − 2^(|𝒩n−1^0|−1)    if G ≅ 0 and K is an N-position

(|ℛn^{G,K}| is defined only when G ∈ K),

|𝒩n| = 2^|ℳn−1| − 2^|𝒩n−1| + 1 − ∑_{G∈𝒩n−2} |ℛn^G|,

|𝒩n^0| = 2^(|ℳn−1|−1) − 2^(|𝒩n−1|−1) − ∑_{G∈𝒩n−2^0} |ℛn^G|.
|ℳn|. There are 2^|ℳn−1| subsets of ℳn−1. For each subset H ⊂ ℳn−1, either H is canonical, or H simplifies to G for a unique G ∈ ℳn−2.

|ℛn^G|. In order for H ⊂ ℳn−1 to simplify to G, it must have the exact form
H ≅ {G1 , . . . , Gk , H1 , . . . , Hl },
H ≅ {G1 , . . . , Gk , H1 , . . . , Hj , K}
as well (since 0 can never be reversible). So these two conditions are both mutually exclusive and entirely contained within the 2^(|ℳn−1|−1) subsets originally counted.
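As a sanity check, the day-5 count of Figure 20.2 follows mechanically from these recurrences. A Python sketch, where the |𝒮4^G| values and the count |𝒩4^0| = 9 are read off from Figure 20.2 and the surrounding text:

```python
# |M5| = 2^|M4| - sum over G in M3 of |R5^G|, where
#   |R5^G| = 2^|S4^G| - 1         for G not equal to 0, and
#   |R5^0| = 2^|S4^0| - 2^|N4^0|  (the proviso-adjusted count).
S4 = {"2#": 14, "3": 10, "2": 12, "1": 9}  # |S4^G| for G != 0 (Figure 20.2)
S4_zero = 10   # |S4^0|
N4_zero = 9    # N-positions in M4 that have 0 as an option
M4 = 22        # |M4|

reducible = sum(2**s - 1 for s in S4.values()) + (2**S4_zero - 2**N4_zero)
M5 = 2**M4 - reducible
print(M5)  # 4171780
```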
Proof. We can assume that Parts(G′ ) is correctly computed for all G′ ∈ G. Now it is
easy to see that every game that is put into 𝒳 is indeed a part of G: if a partition X + Y
is added in Step 8, then X + Y is novel; if it is added in Step 19, then the condition of
Step 18 directly witnesses the identity G = X + Y.
To complete the proof, we must show that every part of G is eventually placed
in 𝒳 . Suppose not, and let X + Y be a partition of G that the algorithm fails to find.
Assume that X + Y is minimal in the sense of b(X) + b(Y). In particular, at some stage
of the algorithm, we have X ′ , Y ′ ∈ 𝒳 for every partition of the form G = X ′ + Y ′ .
Suppose X + Y is derived. By Lemma 24 there is some G† such that G† = X + Y ′ .
Therefore X ∈ Parts(G† ), so X will be encountered in the main loop of Algorithm 1.
Since X is derived, either some G − X ′ or some G′ − X must fail to exist. If G − X ′ does
not exist, then since G = X + Y, we must have G′ = X ′ + Y for some G′ . Therefore
Y = G′ − X ′ . If G′ − X does not exist, then G′ = X ′ + Y for some X ′ , so again Y = G′ − X ′ .
In either case, Y ∈ 𝒴 (as defined in Algorithm 1). Thus it suffices to verify that G, X,
and Y jointly satisfy the condition of Step 18. By the inductive hypothesis we have
X ′ , Y ′ ∈ 𝒳 whenever G = X ′ + Y ′ , so the condition states precisely that G = X + Y,
which is true. Therefore X and Y are put into 𝒳 in Step 19, a contradiction.
Finally, suppose X + Y is novel, and assume without loss of generality that X is
novel. Then both G−X ′ and G′ −X exist, and it is easily checked that Y = {G−X ′ , G′ −X}.
Since the algorithm fails to detect X + Y at Step 8, it must be the case that X ′ ∈ ̸ 𝒳 for
some X ′ . Now b(X ′ ) < b(X), so by the inductive hypothesis this implies b(G − X ′ ) >
b(Y). Therefore G − X ′ must be a reversible option of Y = {G − X ′ , G′ − X}, so either
G − X ′′ = Y, or G′ − X ′ = Y. The former is obviously false (by cancellation), so we must
have Y = G′ − X ′ , but then the partition X + Y will be detected in Step 19 by the same
argument used in the previous paragraph.
1: 𝒳 ← 0
2: for all G′ ∈ G do
Bibliography
[1] D. T. Allemang, Machine computation with finite games, Master’s thesis, Trinity College,
Cambridge, 1984, http://miseregames.org/allemang/.
[2] C. L. Bouton, Nim, a game with a complete mathematical theory, Ann. of Math. 3 (1901), 35–39.
[3] J. H. Conway, On Numbers and Games, second edition, A K Peters, Ltd./CRC Press, Natick, MA,
2001.
[4] J. H. Conway, Personal communication, 2006.
[5] P. M. Grundy, Mathematics and games, Eureka 2 (1939), 6–8.
[6] P. M. Grundy and C. A. B. Smith, Disjunctive games with the last player losing, Proc. Cambridge
Philos. Soc. 52 (1956), 527–533.
[7] R. K. Guy and C. A. B. Smith, The G-values of various games, Proc. Cambridge Philos. Soc. 52
(1956), 514–526.
[8] T. E. Plambeck, Taming the wild in impartial combinatorial games, Integers 5 (2005), #G05.
[9] T. E. Plambeck and A. N. Siegel, Misère quotients for impartial games, J. Combin. Theory Ser. A
115 (2008), 593–622.
[10] A. N. Siegel, Combinatorial Game Theory, volume 146 in Graduate Studies in Mathematics,
American Mathematical Society, Providence, RI, 2013.
[11] C. A. B. Smith, Compound two-person deterministic games, unpublished manuscript.
[12] C. Thompson, Count of day 6 misere-inequivalent impartial games, posted to usenet
rec.games.abstract on February 19, 1999.