
Richard J. Nowakowski, Bruce M. Landman, Florian Luca, Melvyn B. Nathanson,
Jaroslav Nešetřil, and Aaron Robertson (Eds.)

Combinatorial Game Theory

A Special Collection in Honor of Elwyn Berlekamp, John H. Conway and
Richard K. Guy

De Gruyter Proceedings in Mathematics
Mathematics Subject Classification 2010
05A, 05C55, 05C65, 05D, 11A, 11B, 11D, 11K, 11N, 11P, 11Y, 91A46

Editors

Richard J. Nowakowski
Dalhousie University
Dept. of Mathematics & Statistics
Chase Building
Halifax NS B3H 3J5
Canada
r.nowakowski@dal.ca

Bruce M. Landman
University of Georgia
Department of Mathematics
Athens, GA 30602
USA
Bruce.Landman@uga.edu

Florian Luca
University of Witwatersrand
School of Mathematics
1 Jan Smuts Avenue
Johannesburg 2000
Republic of South Africa
Florian.Luca@wits.ac.za

Melvyn B. Nathanson
Lehman College (CUNY)
Department of Mathematics
250 Bedford Park Boulevard West
Bronx NY 10468
USA
Melvyn.Nathanson@lehman.cuny.edu

Jaroslav Nešetřil
Charles University
Computer Science Institute (IUUK)
Malostranske nam. 25
118 00 Praha
Czech Republic
Nesetril@iuuk.mff.cuni.cz

Aaron Robertson
Colgate University
Department of Mathematics
219 McGregory Hall
Hamilton NY 13346
USA
arobertson@colgate.edu

ISBN 978-3-11-075534-3
e-ISBN (PDF) 978-3-11-075541-1
e-ISBN (EPUB) 978-3-11-075549-7

Library of Congress Control Number: 2022934372

Bibliographic information published by the Deutsche Nationalbibliothek


The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie;
detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2022 Walter de Gruyter GmbH, Berlin/Boston


Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck

www.degruyter.com
Preface
What is 1 + 1 + 1?
John H. Conway, 1973

Individually, each of Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy has received much rightly deserved praise. Each made lasting contributions to many areas of mathematics. This volume is dedicated to their work in combinatorial game theory. It is due to their efforts that combinatorial game theory exists as a subject.

Brief History of how Winning Ways came to be


Bouton first analyzed nim [67], little realizing how central nim was to be. In the next two decades, other researchers contributed the analysis of a few other, specific games. The chess champion Emanuel Lasker came close to a complete theory of impartial games. It was in the 1930s that Grundy [68] and Sprague [72] gave a complete analysis, now known as the Sprague–Grundy theory. Although the theory is elegant and easy to apply, the subject languished because there was no clear direction in which to develop it. In the late 1940s, Richard K. Guy rediscovered the theory and defined the octal games. In 1956, Guy and C. A. B. Smith published The G-values of various games [42]. This gave the world an infinite number of impartial games and led to many interesting, easy to state, and yet still unsolved conjectures.
The analysis of partizan games looked out of reach. The Fields medalist John Milnor [70] in 1953 published Sums of positional games. This covered only games in which players gained when they played, and it was not easy to apply. In 1960, John Conway met Michael Guy, Richard's son. Through this friendship, John met Richard and asked about partizan games. This turned out to be a recurring theme in their work in the next two decades. Also in 1960, Elwyn Berlekamp got roped into playing a 3 × 3 dots-&-boxes game against a computer. He lost, but, knowing about the Sprague–Grundy theory, he analyzed the game. (Recently, Elwyn claimed that he had never lost a game since.) Elwyn met Richard at the 1967 Chapel Hill conference and suggested that they write a book. Richard agreed, got John and Elwyn together in 1969, and work began. The analysis of each nonimpartial game was well thought out but ad hoc. John, with his training in set theory, started to see a structure emerging when games were decomposed into components. He gave the names 1 and 1/2 to two abstract games and was delighted (giggled like a baby was the phrase he used) when he discovered that, as games, 1/2 + 1/2 = 1. He wrote On Numbers and Games [28] in a week. This caused some friction among the three, but, eventually, work restarted on Winning Ways [3, 4].
R. Austin, S. Devitt, D. Duffus, and I, as graduate students at Calgary, scoured the early page-proofs. We suggested numerous jokes and puns. Fortunately, the authors rejected all of them.

https://doi.org/10.1515/9783110755411-201

One other person deserves to be mentioned: Louise Guy, Richard's wife. A gracious lady, she made every visitor to their house feel welcome. Some people have asked why the combinatorial game players, Left and Right, are female and male, respectively. The original reasons have been forgotten, but after Winning Ways appeared, it became a mark of respect to remember them as Louise and Richard.

Why Elwyn, John, and Richard are important


Many books are written, enjoy a little success, and then are forgotten by all but a few. On Numbers and Games, but especially Winning Ways [3, 4], are still popular today. This popularity is due to the authors' personalities and their approach to mathematics. All were great ambassadors for mathematics, writing explanatory articles and giving many public lectures. More than that, they understood that mathematics needs a human touch. These days, it is easy to get a computer to play a game well, but how do you get a person to play well? This was one of their aims. Winning Ways is 800+ pages of puns, humor, easy-to-remember sayings, and verses. These provide great and memorable insights into the games and their structures, and the book is still a rich source of material for researchers. MathSciNet reports that Winning Ways is cited by over 300 articles; Google Scholar reports over 3000 citations. Yet any reader will be hard pressed to find a single mathematical proof in the book. Elwyn, John, and Richard wrote it to entertain, draw in a reader, and give them an intuitive feeling for the games.
After the publication of Winning Ways, even though all were well known for their
research outside of combinatorial game theory, they remained active in the subject.
Each was interested in many parts of the subject, but, very loosely, their main interests
were:
– Elwyn Berlekamp considered the problem of how to define and quantify the no-
tion of the “urgency” of a move. He made great strides with his concept of an
enriched environment [11, 24, 25]. He was also fascinated by go [7, 8, 9, 10, 12, 11]
and dots-&-boxes [13, 18, 23].
– John Conway remained interested in pushing the theory of surreal numbers, par-
ticularly infinite games [30, 37, 41], games from groups and codes [32, 39], and
misère games [35].
– Richard K. Guy retained an interest in subtraction and octal games, writing a book
for inquisitive youngsters [52]. He continued to present the theory as it was [54, 57,
58, 59, 61] and also summarized the important problems [56, 60, 62, 64].

Standing on their shoulders


Most of the papers in this volume can be traced directly back to Winning Ways and On Numbers and Games, or to the continuing interests of the three. Several, though, illustrate how far the subject has developed. A general approach to impartial misère games was only started by Plambeck [71]. A. Siegel (a student of Berlekamp), a major figure developing this theory, pushes this further in Chapter 20. The theory of partizan misère games was only started in 2007 [69]. Chapter 10 analyzes a specific game played in the context of all misère games. Chapter 16 contains important results for analyzing misère dead-ending games. In Winning Ways, dots-&-boxes and top-entails do not fit into the theory, each in a separate way. They are only partially analyzed, and then only via ad hoc methods. Chapter 17 finds a normal play extension that covers both types of games. (The authors think this would have intrigued them but are not sure if they would have fully approved.)
Chapters 1, 5–9, 12, 15, 18, and 19 either directly extend the theory or consider a
related game to ones given in Winning Ways. As is evidenced by Richard K. Guy’s early
contributions, it is also important to have new sources of games. These are presented
in Chapters 2, 3, 11, 13, and 14.
Serendipity gave us Chapter 4. This paper is the foundation of Chapters 1 and 5. It gives a simple, effective-for-humans test for when games are numbers. The authors are sure that Elwyn, John, and Richard would have started it with a rhyming couplet that everyone would then remember.
Elwyn, John, and Richard gave freely of their time. Many people will remember the coffee-times and evenings at the MSRI and BIRS workshops. Each would be at a large table fully occupied by anyone who wished to be there, discussing and sometimes solving problems. Students were especially welcome. All combinatorial games workshops now follow this inclusive model. A large number of papers originate at these workshops, have several coauthors, and include students. They also shared their time outside of conferences and workshops. Many students will remember those offhand moments, with one or more of them, that often stretched to hours. I was a second-year undergraduate student when I first met John, and he immediately asked me what 1 + 1 + 1 was. Even after I answered "3", he still took the time to explain the intricacies of 3-player games. (The question is still unanswered.)
Their wit, wisdom, and willingness to play provided people with pleasure. They
will be sorely missed, but their legacy lives on.

Richard J. Nowakowski

Books and Papers in Combinatorial Game Theory
authored by Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy, plus six
other seminal papers
[1] Elwyn R. Berlekamp. Some recent results on the combinatorial game called Welter’s Nim. In
Proc. 6th Ann. Princeton Conf. Information Science and Systems, pages 203–204, 1972.
[2] Elwyn R. Berlekamp. The hackenbush number system for compression of numerical data.
Inform. and Control, 26:134–140, 1974.
[3] Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. Winning Ways for Your Mathematical
Plays. Vol. 1. Academic Press Inc. [Harcourt Brace Jovanovich Publishers], London, 1982. Games
in general.
[4] Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. Winning Ways for Your Mathematical
Plays. Vol. 2. Academic Press Inc. [Harcourt Brace Jovanovich Publishers], London, 1982.
Games in particular.
[5] Elwyn R. Berlekamp. Blockbusting and domineering. J. Combin. Theory (Ser. A), 49:67–116,
1988. An earlier version, entitled Introduction to blockbusting and domineering, appeared in:
The Lighter Side of Mathematics, Proc. E. Strens Memorial Conf. on Recr. Math. and its History,
Calgary, 1986, Spectrum Series (R. K. Guy and R. E. Woodrow, eds.), Math. Assoc. of America,
Washington, DC, 1994, pp. 137–148.
[6] Elwyn R. Berlekamp. Two-person, perfect-information games. In The Legacy of John von
Neumann (Hempstead NY, 1988), Proc. Sympos. Pure Math., volume 50, pages 275–287. Amer.
Math. Soc., Providence, RI, 1990.
[7] Elwyn R. Berlekamp. Introductory overview of mathematical Go endgames. In Proceedings of
Symposia in Applied Mathematics, Combinatorial Games, volume 43, pages 73–100. American
Mathematical Society, 1991.
[8] Elwyn R. Berlekamp. Introductory overview of mathematical go endgames. In R. K. Guy, editor,
Proc. Symp. Appl. Math., Combinatorial Games, volume 43, pages 73–100. Amer. Math. Soc.,
Providence, RI, 1991.
[9] Elwyn R. Berlekamp and David Wolfe. Mathematical Go: Chilling Gets the Last Point. A K Peters,
Ltd., Wellesley, Massachusetts, 1994.
[10] Elwyn R. Berlekamp and David Wolfe. Mathematical Go Endgames: Nightmares for the
Professional Go Player. Ishi Press International, San Jose, London, Tokyo, 1994.
[11] Elwyn R. Berlekamp and Y. Kim. Where is the “thousand-dollar ko?”. In R. J. Nowakowski,
editor, Games of No Chance, Proc. MSRI Workshop on Combinatorial Games, July, 1994,
Berkeley, CA, MSRI Publ., volume 29, pages 203–226. Cambridge University Press, Cambridge,
1996.
[12] Elwyn R. Berlekamp. The economist’s view of combinatorial games. In Richard J. Nowakowski,
editor, Games of No Chance, MSRI, volume 29, pages 365–405. Cambridge Univ. Press,
Cambridge, 1996.
[13] Elwyn R. Berlekamp. The Dots and Boxes Game: Sophisticated Child’s Play. A K Peters, Ltd.,
Natick, MA, 2000.
[14] Elwyn R. Berlekamp. Sums of N × 2 amazons. In F. T. Bruss and L. M. Le Cam, editors, Lecture
Notes – Monograph Series, volume 35, pages 1–34. Institute of Mathematical Statistics,
Beechwood, Ohio, 2000. Papers in honor of Thomas S. Ferguson.
[15] Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. Winning Ways for Your Mathematical
Plays. Vol. 1, second edition. A K Peters, Ltd., 2001.

[16] Elwyn R. Berlekamp. The 4g4g4g4g4 problems and solutions. In Richard J. Nowakowski,
editor, More Games of No Chance, MSRI Publications, volume 42, pages 231–241. Cambridge
University Press, 2002.
[17] Elwyn R. Berlekamp. Four games for Gardner. In D. Wolfe and T. Rodgers, editors, Puzzler’s
Tribute: A Feast for the Mind, pages 383–386. A K Peters, Ltd., Natick, MA, 2002. Honoring
Martin Gardner.
[18] Elwyn R. Berlekamp and K. Scott. Forcing your opponent to stay in control of a loony
dots-and-boxes endgame. In Richard J. Nowakowski, editor, More Games of No Chance, MSRI
Publications, volume 42, pages 317–330. Cambridge University Press, 2002.
[19] Elwyn R. Berlekamp. Idempotents among partisan games. In Richard J. Nowakowski, editor,
More Games of No Chance, MSRI Publications, volume 42, pages 3–23. Cambridge University
Press, 2002.
[20] Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. Winning Ways for Your Mathematical
Plays. Vol. 2, second edition. A K Peters, Ltd., 2003.
[21] Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. Winning Ways for Your Mathematical
Plays. Vol. 3, second edition. A K Peters, Ltd., 2003.
[22] Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. Winning Ways for Your Mathematical
Plays. Vol. 4, second edition. A K Peters, Ltd., 2004.
[23] Elwyn R. Berlekamp. Yellow-brown hackenbush. In Michael H. Albert and Richard J.
Nowakowski, editors, Games of No Chance 3, MSRI, volume 56, pages 413–418. Cambridge
Univ. Press, 2009.
[24] Elwyn R. Berlekamp and Richard M. Low. Entrepreneurial chess. Internat. J. Game Theory,
47(2):379–415, 2018.
[25] Elwyn R. Berlekamp. Temperatures of games and coupons. In Urban Larsson, editor, Games of
No Chance 5, Mathematical Sciences Research Institute Publications, volume 70, pages 21–33.
Cambridge University Press, 2019.
[26] John H. Conway. All numbers great and small. Res. Paper No. 149, Univ. of Calgary Math. Dept.,
1972.
[27] John H. Conway and H. S. M. Coxeter. Triangulated polygons and frieze patterns. Math. Gaz.,
57:87–94; 175–183, 1973.
[28] John H. Conway. On Numbers and Games. Academic Press, 1976.
[29] John H. Conway. All games bright and beautiful. Amer. Math. Monthly, 84(6):417–434, 1977.
[30] John H. Conway. Loopy games. Ann. Discrete Math., 3:55–74, 1978. Advances in graph theory
(Cambridge Combinatorial Conf., Trinity College, Cambridge, 1977).
[31] John H. Conway. A gamut of game theories. Math. Mag., 51(1):5–12, 1978.
[32] John H. Conway and N. J. A. Sloane. Lexicographic codes: error-correcting codes from game
theory. IEEE Trans. Inform. Theory, 32(3):337–348, 1986.
[33] John H. Conway. More ways of combining games. In R. K. Guy, editor, Proc. Symp. Appl. Math.,
Combinatorial Games, volume 43, pages 57–71. Amer. Math. Soc., Providence, RI, 1991.
[34] John H. Conway. Numbers and games. In R. K. Guy, editor, Proc. Symp. Appl. Math.,
Combinatorial Games, volume 43, pages 23–34. Amer. Math. Soc., Providence, RI, 1991.
[35] W. L. Sibert and J. H. Conway. Mathematical Kayles. Internat. J. Game Theory, 20(3):237–246,
1992.
[36] John H. Conway. On numbers and games. In Summer Course 1993: The Real Numbers (Dutch),
CWI Syllabi, volume 35, pages 101–124. Math. Centrum Centrum Wisk. Inform., Amsterdam,
1993.
[37] John H. Conway. The surreals and the reals. Real numbers, generalizations of the reals,
and theories of continua. In Synthese Lib., volume 242, pages 93–103. Kluwer Acad. Publ.,
Dordrecht, 1994.

[38] John H. Conway. The angel problem. In R. J. Nowakowski, editor, Games of No Chance, Proc.
MSRI Workshop on Combinatorial Games, July, 1994, Berkeley, CA, MSRI Publ., volume 29,
pages 3–12. Cambridge University Press, Cambridge, 1996.
[39] John H. Conway. M13. In Surveys in Combinatorics, London Math. Soc., Lecture Note Ser.,
volume 241, pages 1–11. Cambridge Univ. Press, Cambridge, 1997.
[40] John H. Conway. On Numbers and Games, 2nd edition. A K Peters, Ltd., 2001. First edition
published in 1976 by Academic Press.
[41] John H. Conway. More infinite games. In Richard J. Nowakowski, editor, More Games of No
Chance, MSRI Publications, volume 42, pages 31–36. Cambridge University Press, 2002.
[42] Richard K. Guy and Cedric A. B. Smith. The G-values of various games. Proc. Camb. Phil. Soc.,
52:514–526, 1956.
[43] Richard K. Guy. Twenty questions concerning Conway’s sylver coinage. Amer. Math. Monthly,
83:634–637, 1976.
[44] Richard K. Guy. Games are graphs, indeed they are trees. In Proc. 2nd Carib. Conf. Combin. and
Comput., pages 6–18. Letchworth Press, Barbados, 1977.
[45] Richard K. Guy. Partisan and impartial combinatorial games. In Combinatorics (Proc. Fifth
Hungarian Colloq., Keszthely, 1976), Vol. I, Colloq. Math. Soc. János Bolyai, volume 18, pages
437–461. North-Holland, Amsterdam, 1978.
[46] Richard K. Guy. Partizan and impartial combinatorial games. Colloq. Math. Soc. János Bolyai,
18:437–461, 1978. Proc. 5th Hungar. Conf. Combin. Vol. I (A. Hajnal and V. T. Sós, eds.),
Keszthely, Hungary, 1976, North-Holland.
[47] Richard K. Guy. Partizan games. In Colloques Internationaux C. N. R. No. 260 — Problèmes
Combinatoires et Théorie des Graphes, pages 199–205, 1979.
[48] Richard K. Guy. Anyone for twopins? In D. A. Klarner, editor, The Mathematical Gardner, pages
2–15. Wadsworth Internat., Belmont, CA, 1981.
[49] Richard K. Guy. Graphs and games. In L. W. Beineke and R. J. Wilson, editors, Selected Topics in
Graph Theory, volume 2, pages 269–295. Academic Press, London, 1983.
[50] Richard K. Guy. John Isbell's game of beanstalk and John Conway's game of beans-don't-talk.
Math. Mag., 59:259–269, 1986.
[51] Richard K. Guy. Fair Game, COMAP Math. Exploration Series. Arlington, MA, 1989.
[52] Richard K. Guy. Fair Game: How to play impartial combinatorial games. COMAP, Inc., 60 Lowell
Street, Arlington, MA 02174, 1989.
[53] Richard K. Guy, editor. Combinatorial Games, Proceedings of Symposia in Applied
Mathematics, volume 43. American Mathematical Society, Providence, RI, 1991. Lecture notes
prepared for the American Mathematical Society Short Course held in Columbus, Ohio, August
6–7, 1990, AMS Short Course Lecture Notes.
[54] Richard K. Guy. Impartial games. In Combinatorial Games (Columbus, OH, 1990), Proc.
Sympos. Appl. Math., volume 43, pages 35–55. Amer. Math. Soc., Providence, RI, 1991.
[55] Richard K. Guy. Mathematics from fun & fun from mathematics; an informal autobiographical
history of combinatorial games. In J. H. Ewing and F. W. Gehring, editors, Paul Halmos:
Celebrating 50 Years of Mathematics, pages 287–295. Springer Verlag, New York, NY, 1991.
[56] Richard K. Guy. Unsolved problems in combinatorial games. In American Mathematical Society
Proceedings of the Symposium on Applied Mathematics, volume 43, 1991. Check my homepage
for a copy, http://www.gustavus.edu/~wolfe.
[57] Richard K. Guy. What is a Game? In Combinatorial Games, Proceedings of Symposia in Applied
Mathematics, volume 43, 1991.
[58] Richard K. Guy. Combinatorial games. In R. L. Graham, M. Grötschel, and L. Lovász, editors,
Handbook of Combinatorics, volume II, pages 2117–2162. North-Holland, Amsterdam, 1995.

[59] Richard K. Guy. Impartial games. In R. J. Nowakowski, editor, Games of No Chance, Proc.
MSRI Workshop on Combinatorial Games, July, 1994, Berkeley, CA, MSRI Publ., volume 29,
pages 61–78. Cambridge University Press, Cambridge, 1996. Earlier version in: Combinatorial
Games, Proc. Symp. Appl. Math. (R. K. Guy, ed.), Vol. 43, Amer. Math. Soc., Providence, RI,
1991, pp. 35–55.
[60] Richard K. Guy. Unsolved problems in combinatorial games. In R. J. Nowakowski, editor, Games
of No Chance, MSRI Publ., volume 29, pages 475–491. Cambridge University Press, 1996.
[61] Richard K. Guy. What is a Game? In Richard Nowakowski, editor, Games of No Chance, MSRI
Publ., volume 29, pages 43–60. Cambridge University Press, 1996.
[62] Ian Caines, Carrie Gates, Richard K. Guy, and Richard J. Nowakowski. Unsolved problems:
periods in taking and splitting games. Amer. Math. Monthly, 106:359–361, 1999.
[63] Richard K. Guy. Aviezri Fraenkel and combinatorial games. Elect. J. Combin, 8:#I2, 2001.
[64] Richard K. Guy and Richard J. Nowakowski. Unsolved problems in combinatorial games. In
Richard J. Nowakowski, editor, More Games of No Chance, MSRI Publications, volume 42,
pages 457–473. Cambridge University Press, 2002.
[65] Richard K. Guy and Richard J. Nowakowski. Unsolved problems in combinatorial game theory.
In M. H. Albert and R. J. Nowakowski, editors, Games of No Chance 3, MSRI, pages 465–489.
Cambridge Univ. Press, 2009.
[66] Alex Fink and Richard K. Guy. The number-pad game. College Math. J., 38:260–264, 2007.
[67] Charles L. Bouton. Nim, a game with a complete mathematical theory. Annals of Mathematics,
3(2):35–39, 1902.
[68] Patrick M. Grundy. Mathematics and games. Eureka, 2:6–8, 1939.
[69] G. A. Mesdal and Paul Ottaway. Simplification of Partizan Games in misère play. INTEGERS,
7:#G06, 2007.
[70] John Milnor. Sums of positional games. In: H. W. Kuhn and A. W. Tucker, eds. Contributions to
the Theory of Games, Vol. 2, Ann. of Math. Stud., volume 28, pages 291–301. Princeton, 1953.
[71] T. E. Plambeck. Taming the wild in impartial combinatorial games. INTEGERS, 5:#G05, 2005.
[72] Roland P. Sprague. Über mathematische Kampfspiele. Tôhoku Math. J., 41:438–444, 1935–36.
Contents
Preface | V

Anthony Bonato, Melissa A. Huggan, and Richard J. Nowakowski


The game of flipping coins | 1

Kyle Burke, Matthew Ferland, Michael Fisher, Valentin Gledel, and Craig
Tennenhouse
The game of blocking pebbles | 17

Kyle Burke, Matthew Ferland, and Shang-Hua Teng


Transverse Wave: an impartial color-propagation game inspired by social influence
and Quantum Nim | 39

Alda Carvalho, Melissa A. Huggan, Richard J. Nowakowski, and Carlos Pereira dos
Santos
A note on numbers | 67

Alda Carvalho, Melissa A. Huggan, Richard J. Nowakowski, and Carlos Pereira dos
Santos
Ordinal sums, clockwise hackenbush, and domino shave | 77

Alexander Clow and Stephen Finbow


Advances in finding ideal play on poset games | 99

Erik D. Demaine and Yevhenii Diomidov


Strings-and-Coins and Nimstring are PSPACE-complete | 109

Eric Duchêne, Marc Heinrich, Richard Nowakowski, and Aline Parreau


Partizan subtraction games | 121

Matthieu Dufour, Silvia Heubach, and Anh Vo


Circular Nim games CN(7, 4) | 139

Aaron Dwyer, Rebecca Milley, and Michael Willette


Misère domineering on 2 × n boards | 157

Zachary Gates and Robert Kelvey


Relator games on groups | 171

L. R. Haff
Playing Bynum’s game cautiously | 201

Melissa A. Huggan and Craig Tennenhouse


Genetically modified games | 229

Douglas E. Iannucci and Urban Larsson


Game values of arithmetic functions | 245

Yuki Irie
A base-p Sprague–Grundy-type theorem for p-calm subtraction games: Welter’s
game and representations of generalized symmetric groups | 281

Urban Larsson, Rebecca Milley, Richard Nowakowski, Gabriel Renault, and Carlos
Santos
Recursive comparison tests for dicot and dead-ending games under misère
play | 309

Urban Larsson, Richard J. Nowakowski, and Carlos P. Santos


Impartial games with entailing moves | 323

James B. Martin
Extended Sprague–Grundy theory for locally finite games, and applications to
random game-trees | 343

Ryohei Miyadera and Yushi Nakaya


Grundy numbers of impartial three-dimensional chocolate-bar games | 367

Aaron N. Siegel
On the structure of misère impartial games | 389
Anthony Bonato, Melissa A. Huggan, and Richard J. Nowakowski
The game of flipping coins
Abstract: We consider flipping coins, a partizan version of the impartial game turning turtles, played on lines of coins. We show that the values of this game are numbers and that these are found by first applying a reduction and then decomposing the position into an iterated ordinal sum. This is unusual since moves in the middle of the line do not eliminate the rest of the line. Moreover, if G is decomposed into lines H and K, then G = (H : K^R). This is in contrast to hackenbush strings, where G = (H : K).

1 Introduction
In Winning Ways, Volume 3 [3], Berlekamp, Conway, and Guy introduced turning
turtles and considered many variants. Each game involves a finite row of turtles,
either on feet or backs, and a move is to turn one turtle over onto its back, with the
option of flipping a number of other turtles, to the left, each to the opposite of its cur-
rent state (feet or back). The number depends on the rules of the specific game. The
authors moved to playing with coins as playing with turtles is cruel.
These games can be solved using the Sprague–Grundy theory for impartial games
[2], but the structure and strategies of some variants are interesting. The strategy for moebius (flip up to five coins) played with 18 coins involves Möbius transformations; for mogul (flip up to seven coins) on 24 coins, it involves the miracle octad generator developed by R. Curtis in his work on the Mathieu group M_24 and the Leech lattice [6, 7]; ternups [3] (flip three equally spaced coins) requires ternary expansions; and turning corners [3], a two-dimensional version where the corners of a rectangle are flipped, needs nim-multiplication.
We consider a simple partizan version of turning turtles, also played with
coins. We give a complete solution and show that it involves ordinal sums. This is
somewhat surprising since moves in the middle of the line do not eliminate moves at
the end. Compare this with hackenbush strings [2] and domino shave [5].

Acknowledgement: Anthony Bonato was supported by an NSERC Discovery grant. Melissa A. Hug-
gan was supported by an NSERC Postdoctoral Fellowship. The author also thanks the Department of
Mathematics at Ryerson University, which hosted the author while the research took place. Richard J.
Nowakowski was supported by an NSERC Discovery grant.

Anthony Bonato, Department of Mathematics, Ryerson University, Toronto, Ontario, Canada, e-mail:
abonato@ryerson.ca
Melissa A. Huggan, Department of Mathematics and Computer Science, Mount Allison University,
Sackville, New Brunswick, Canada, e-mail: mhuggan@mta.ca
Richard J. Nowakowski, Department of Mathematics and Statistics, Dalhousie University, Halifax,
Nova Scotia, Canada, e-mail: r.nowakowski@dal.ca

https://doi.org/10.1515/9783110755411-001

We will denote heads by 0 and tails by 1. Our partizan version will be played with a line of coins, represented by a 0–1 sequence d_1 d_2 … d_n, where d_i ∈ {0, 1}. With this position, we associate the binary number ∑_{i=1}^{n} d_i 2^{i−1}. Left moves by choosing some pair of coins d_i, d_j, i < j, where d_i = d_j = 1, and flipping them over so that both coins are 0s. Right also chooses a pair d_k, d_ℓ, k < ℓ, with d_k = 0 and d_ℓ = 1, and flips them over. If j is the greatest index such that d_j = 1, then the coins d_k, k > j, are deleted. For example,

1011 = {0001, 001, 1 | 1101, 111}.

The game eventually ends since the associated binary number decreases with every
move. We call this game flipping coins.
Another way to model flipping coins is to consider tokens on a strip of loca-
tions. Left can remove a pair of tokens, and Right is able to move a token to an open
space to its left. We use the coin flipping model for this game to be consistent with the
literature.
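To make the move rules concrete, here is a minimal sketch (the function names are ours, not the authors') that enumerates the Left and Right options of a position given as a 0–1 string; coins beyond the greatest remaining 1 are deleted, as in the rules above.

```python
def trim(g: str) -> str:
    """Delete coins after the greatest index holding a 1."""
    j = g.rfind('1')
    return g[:j + 1] if j >= 0 else ''

def left_options(g: str) -> set:
    """Left flips a pair of 1s to 0s."""
    ones = [k for k, c in enumerate(g) if c == '1']
    return {trim(g[:x] + '0' + g[x + 1:y] + '0' + g[y + 1:])
            for x in ones for y in ones if x < y}

def right_options(g: str) -> set:
    """Right flips a (0, 1) pair, the 0 to the left of the 1, to (1, 0)."""
    return {trim(g[:k] + '1' + g[k + 1:m] + '0' + g[m + 1:])
            for k in range(len(g)) for m in range(k + 1, len(g))
            if g[k] == '0' and g[m] == '1'}

# Reproduces the example above: 1011 = {0001, 001, 1 | 1101, 111}.
print(sorted(left_options("1011")), sorted(right_options("1011")))
```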
The game is biased toward Left. If there is a nonzero even number of 1s in a position, then Left always has a move; that is, she will win. Left also wins any nontrivial position starting with 1. However, there are positions that Right wins. The two-part method to find the outcomes and values of the remaining positions can be applied to all positions. First, apply a modification to the position (unless it is all 1s), which reduces the runs of consecutive 1s to at most three. After this reduction, build an iterated ordinal sum by successively deleting everything after the third-last 1; each deleted substring determines the value of the next term in the ordinal sum. As a consequence, the original position is a Right win if the position remaining at the end is of the form 0…01, and the value is given by the ordinal sum.
The necessary background for numbers is in Section 2. Section 3 contains results
about outcomes and also includes our main results. First, we show that the values are
numbers in Theorem 3.2. Next, an algorithm to find the value of a position is presented,
and Theorem 3.3 states that the value given by the algorithm is correct.
The actual analysis is in Section 4. It starts by identifying the best moves for both players in Theorem 4.2. This leads directly to the core result, Lemma 4.5, which shows that the value of a position is an ordinal sum. The ordinal sum decomposition of G is found as follows. Let G^L be the position after the Left move that removes the two rightmost 1s. Let H be the string G \ G^L, that is, the substring eliminated by Left's move. Let H^R be the result of Right's best move in H. Now we have that G = G^L : H^R. In contrast, the ordinal sums for hackenbush strings and domino shave [5] involve the value of H, not H^R.
The proof of Theorem 3.3 is given in Section 4.1. The final section includes a brief
discussion of open problems.
Finally, we pose a question for the reader, which we answer at the end of Sec-
tion 4.1: Who wins 0101011111 + 1101100111 + 0110110110111 and how?

2 Numbers
All the values in this paper are numbers, and this section contains all the necessary
background to make the paper self-contained. For further details, consult [1, 8]. Posi-
tions are written in terms of their options; that is, G = {Gℒ | Gℛ }.

Definition 2.1 ([1, 2, 8]). Let G be a number whose options are numbers, and let G^L, G^R be the Left and Right options of the canonical form of G.
1. If there is an integer k with G^L < k < G^R, or if either G^L or G^R does not exist, then G is the integer, say n, closest to zero that satisfies G^L < n < G^R.
2. If both G^L and G^R exist and the previous case does not apply, then G = p/2^q, where q is the least positive integer such that there is an odd integer p satisfying G^L < p/2^q < G^R.

The properties of numbers required for this paper are contained in the next three
theorems.

Theorem 2.2 ([1, 2, 8]). Let G be a number whose options are numbers, and let GL and
GR be the Left and Right options of the canonical form of G. If G′ and G′′ are any Left
and Right options, respectively, then

G′ ⩽ GL < G < GR ⩽ G′′ .

Theorem 2.2 shows that if we know that the string of inequalities holds, then we
need to only consider the unique best move for both players in a number.
We include the following examples to further illustrate these ideas:
(a) 0 = { | } = {−9 | } = {−1/2 | 4/7};
(b) −2 = { | −1} = {−5/2 | −31/16};
(c) 1 = {0 | } = {0 | 100};
(d) 1/2 = {0 | 1} = {3/8 | 17/32}.
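The simplest-number rule of Definition 2.1 mechanizes directly. Below is a minimal sketch (the function name simplest is ours; None stands for a missing option, and when both options are present we assume gl < gr); it reproduces examples (a)–(d).

```python
from fractions import Fraction
from math import ceil, floor

def simplest(gl, gr):
    """Value of {gl | gr} for numbers gl < gr; None marks a missing option."""
    if gl is None and gr is None:
        return Fraction(0)
    if gr is None:                 # {gl | }: integer closest to zero above gl
        return Fraction(max(floor(gl) + 1, 0))
    if gl is None:                 # { | gr}: integer closest to zero below gr
        return Fraction(min(ceil(gr) - 1, 0))
    if floor(gl) + 1 < gr:         # an integer fits strictly between
        if gl < 0 < gr:
            return Fraction(0)
        return Fraction(floor(gl) + 1 if gl >= 0 else ceil(gr) - 1)
    q = 0                          # else p/2^q with least q and odd p
    while True:
        q += 1
        p = floor(gl * 2 ** q) + 1
        if p < gr * 2 ** q:
            return Fraction(p, 2 ** q)

print(simplest(Fraction(-1, 2), Fraction(4, 7)))     # 0, as in (a)
print(simplest(Fraction(-5, 2), Fraction(-31, 16)))  # -2, as in (b)
print(simplest(Fraction(0), None))                   # 1, as in (c)
print(simplest(Fraction(3, 8), Fraction(17, 32)))    # 1/2, as in (d)
```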

For games G and H, to show that G ⩾ H, we need to show that G − H ⩾ 0; that is, we need to show that Left wins G − H moving second. For more information, see Sections 5.1, 5.8, and 6.3 of [1].
Let G and H be games. The ordinal sum of G, the base, and H, the exponent, is

G : H = {G^ℒ, G : H^ℒ | G^ℛ, G : H^ℛ}.

Intuitively, playing in G eliminates H, but playing in H does not affect G. For ease of
reading, if an ordinal sum is a term in an expression, then we enclose it in brackets.
Note that x : 0 = x = 0 : x since neither player has a move in 0. We demonstrate
how to calculate the values of other positions with the following examples:
(a) 1 : 1 = {1 | } = 2;
(b) 1 : −1 = {0 | 1} = 1/2;
(c) 1 : 1/2 = {0, (1 : 0) | (1 : 1)} = {0, 1 | {1 | }} = {1 | 2} = 3/2;
(d) 1/2 : 1 = {0, (1/2 : 0) | 1} = {0, 1/2 | 1} = {1/2 | 1} = 3/4;
(e) (1 : −1) : 1/2 = (1/2 : 1/2) = {0, (1/2 : 0) | 1, (1/2 : 1)} = {0, 1/2 | 1, 3/4} = {1/2 | 3/4} = 5/8.

Note that in all cases, when base and exponent are numbers, the players prefer to play
in the exponent. In the remainder of this paper, all the exponents will be positive.
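For numbers, ordinal sums can also be computed mechanically via sign expansions: as for hackenbush strings, the sign expansion of G : H, with both G and H numbers in canonical form, is that of G followed by that of H. A minimal sketch under that assumption (helper names ours), reproducing examples (a)–(e):

```python
from fractions import Fraction

def signs(x: Fraction) -> list:
    """Sign expansion of a dyadic rational, as a list of +1/-1."""
    out, lo, hi, y = [], None, None, Fraction(0)
    while y != x:
        if x > y:
            out.append(1)
            lo = y
        else:
            out.append(-1)
            hi = y
        y = lo + 1 if hi is None else (hi - 1 if lo is None else (lo + hi) / 2)
    return out

def unsigns(s: list) -> Fraction:
    """The number whose sign expansion is s (inverse of signs)."""
    lo, hi, y = None, None, Fraction(0)
    for sgn in s:
        if sgn > 0:
            lo = y
        else:
            hi = y
        y = lo + 1 if hi is None else (hi - 1 if lo is None else (lo + hi) / 2)
    return y

def ordinal_sum(g: Fraction, h: Fraction) -> Fraction:
    """G : H for numbers in canonical form = concatenated sign expansions."""
    return unsigns(signs(g) + signs(h))

half = Fraction(1, 2)
print(ordinal_sum(Fraction(1), Fraction(1)))    # 2,   example (a)
print(ordinal_sum(Fraction(1), Fraction(-1)))   # 1/2, example (b)
print(ordinal_sum(Fraction(1), half))           # 3/2, example (c)
print(ordinal_sum(half, Fraction(1)))           # 3/4, example (d)
print(ordinal_sum(ordinal_sum(Fraction(1), Fraction(-1)), half))  # 5/8, (e)
```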
One of the most important results about ordinal sums was first reported in Winning
Ways.

Theorem 2.3 (Colon Principle [2]). If K ⩾ K ′ , then G : K ⩾ G : K ′ .

The Colon Principle helps prove inequalities that will be useful in this paper.

Theorem 2.4. Let G and H be numbers all of whose options are also numbers, and let H ⩾ 0.
1. If H = 0, then G : H = G. If H > 0, then (G : H) > G.
2. G^L < (G : H^L) < (G : H) < (G : H^R) < G^R.

Proof. For item (1), the result follows immediately by Theorem 2.3.
For item (2), if H ⩾ 0 and all the options of G and H are numbers, then G^L < G = (G : 0) ⩽ (G : H^L) < (G : H) < (G : H^R). The second, third, and fourth inequalities hold since H is a number, and thus 0 ⩽ H^L < H < H^R, and by applying the Colon Principle. To complete the proof, we need to show that (G : H^R) < G^R. To do so, we check that G^R − (G : H^R) > 0; in words, we check that Left can always win. Left moving first can move in the second summand to G^R − G^R = 0 and win. Right moving first has several options:
1. Moving to G^R − G^L > 0, since G and its options are numbers. Hence Left wins.
2. Moving to G^R − (G : H^{RL}) > 0 by induction.
3. Moving to G^{RR} − (G : H^R), but Left can respond to G^{RR} − G^R > 0 since G and its options are numbers.

In all cases, Left wins moving second. The result follows.

To prove that all the positions are numbers, we use results from [4]. A set of positions of a ruleset is called hereditarily closed if it is closed under taking options. This game satisfies the ruleset properties introduced in [4]. In particular, these are the F1 property and the F2 property, which both capture the notion of first-move disadvantage in numbers and are defined formally as follows.

Definition 2.5 ([4]). Let S be a hereditarily closed ruleset. Given a position G ∈ S, the pair (G^L, G^R) ∈ G^ℒ × G^ℛ satisfies the F1 property if there is G^{RL} ∈ G^{Rℒ} such that G^{RL} ⩾ G^L, or there is G^{LR} ∈ G^{Lℛ} such that G^{LR} ⩽ G^R.

Definition 2.6 ([4]). Let S be a hereditarily closed ruleset. Given a position G ∈ S, the pair (G^L, G^R) ∈ G^ℒ × G^ℛ satisfies the F2 property if there are G^{LR} ∈ G^{Lℛ} and G^{RL} ∈ G^{Rℒ} such that G^{RL} ⩾ G^{LR}.

As proven in [4], if, given any position G ∈ S, all pairs (G^L, G^R) ∈ G^ℒ × G^ℛ satisfy one of these properties, then the values of all positions are numbers. Furthermore, satisfying the F2 property implies satisfying the F1 property, and it was shown that all positions G ∈ S are numbers if and only if, for any G ∈ S, all pairs (G^L, G^R) ∈ G^ℒ × G^ℛ satisfy the F1 property. Combining these results gives the following theorem.

Theorem 2.7 ([4]). Let S be a hereditarily closed ruleset. All positions G ∈ S are numbers if and only if, for any position G ∈ S, all pairs (G^L, G^R) ∈ G^ℒ × G^ℛ satisfy either the F1 or the F2 property.

3 Main results
Before considering the values and associated strategies, we consider the outcomes,
that is, we partially answer the question “Who wins the game?” The full answer re-
quires an analogous analysis to finding the values.

Theorem 3.1. Let G = d_1 d_2 … d_n. If d_1 d_2 … d_n contains an even number of 1s, or if d_1 = 1 and there are at least two 1s, then Left wins G.

Proof. A Right move does not decrease the number of 1s in the position. Thus, if in G,
Left has a move, then she still has a move after any Right move in G. Consequently,
regardless of d1 , if there are an even number of 1s in G, then it will be Left who reduces
the game to all 0s. Similarly, if d1 = 1 and there are an odd number of 1s, then Left will
eventually reduce G to a position with a single 1, that is, to d1 = 1 and di = 0 for i > 1.
In this case, Right has no move and loses.

The remaining case, d1 = 0 and an odd number of 1s, is more involved. The analy-
sis of this case is the subject of the remainder of the paper. We first prove the following:

Theorem 3.2. All flipping coins positions are numbers.

Proof. Let G be a flipping coins position. If only one player has a move, then the game
is an integer. Otherwise, let L be the Left move to change (di , dj ) from (1, 1) to (0, 0). Let
R be the Right move to change (dk , dℓ ) from (0, 1) to (1, 0). No other digits are changed.
If all four indices are distinct, then both L and R can be played in either order. In this
case, GLR = GRL . Thus the F2 property holds. If there are only three distinct indices,
then two of the bits are ones. If Left moves first, then di = dj = dk = 0. If Right moves
first, then there are still two ones remaining after his move. After Left moves, we have
di = dj = dk = 0, and hence GL = GRL . The F1 property holds.

There are no more cases since there must be at least three distinct indices. Since
every position satisfies either the F1 or the F2 property, by Theorem 2.7 it follows that
every position is a number.
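Because every position is a number, its value can be cross-checked by brute force: it is the simplest number strictly between its best options (Definition 2.1 and Theorem 2.2). A sketch under that reasoning follows; the helpers from the earlier sketches are repeated so this block runs standalone, and it is only practical for short strings.

```python
from fractions import Fraction
from functools import lru_cache
from math import ceil, floor

def trim(g: str) -> str:
    j = g.rfind('1')
    return g[:j + 1] if j >= 0 else ''

def left_options(g: str) -> set:
    ones = [k for k, c in enumerate(g) if c == '1']
    return {trim(g[:x] + '0' + g[x + 1:y] + '0' + g[y + 1:])
            for x in ones for y in ones if x < y}

def right_options(g: str) -> set:
    return {trim(g[:k] + '1' + g[k + 1:m] + '0' + g[m + 1:])
            for k in range(len(g)) for m in range(k + 1, len(g))
            if g[k] == '0' and g[m] == '1'}

def simplest(gl, gr):
    """Value of {gl | gr} for numbers gl < gr; None marks a missing option."""
    if gl is None and gr is None:
        return Fraction(0)
    if gr is None:
        return Fraction(max(floor(gl) + 1, 0))
    if gl is None:
        return Fraction(min(ceil(gr) - 1, 0))
    if floor(gl) + 1 < gr:
        if gl < 0 < gr:
            return Fraction(0)
        return Fraction(floor(gl) + 1 if gl >= 0 else ceil(gr) - 1)
    q = 0
    while True:
        q += 1
        p = floor(gl * 2 ** q) + 1
        if p < gr * 2 ** q:
            return Fraction(p, 2 ** q)

@lru_cache(maxsize=None)
def brute_value(g: str) -> Fraction:
    """Game value: all positions are numbers, so recurse on best options."""
    ls = [brute_value(h) for h in left_options(g)]
    rs = [brute_value(h) for h in right_options(g)]
    return simplest(max(ls) if ls else None, min(rs) if rs else None)

print(brute_value("1011"))                      # 1/4
assert brute_value("0111") == brute_value("1")  # Lemma 4.3, alpha empty, a = 0
```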

Given a position G, the following algorithm returns a value.

Algorithm. Let G be a flipping coins position. Let G_0 = G.

1. Set i = 0.
2. Reductions: Let α and β be binary strings, either of which can be empty.
(a) If G_0 = α01^{3+j}β, j ⩾ 1, then set G_0 = α101^j β.
(b) If G_0 = α01^3 β with β containing an even number of 1s, then set G_0 = α10β.
(c) Repeat until neither case applies; then go to Step 3.
3. If G_i is 0^r 1, r ⩾ 0, or 1^a 0^{p_i} 10^{q_i} 1, a ⩾ 0 and p_i + q_i ⩾ 0, then go to Step 5. Otherwise, G_i = α01^a 0^{p_i} 10^{q_i} 1, p_i + q_i ⩾ 1, a > 0, for some α. Set

Q_i = 0^{p_i} 10^{q_i} 1,
G_{i+1} = α01^a.

Go to Step 4.
4. Set i = i + 1. Go to Step 3.
5. If G_i = 0^r 1, then set v_i = −r. If G_i = 1^a 0^{p_i} 10^{q_i} 1, then set v_i = ⌊a/2⌋ + 1/2^{2p_i + q_i}. Go to Step 6.
6. For j from i − 1 down to 0, set v_j = v_{j+1} : 1/2^{2p_j + q_j − 1}.
7. Return the number v_0.

The algorithm implicitly returns two different results:
1. for Step 3, the substrings Q_0, Q_1, …, Q_{i−1}, G_i partition the reduced version of G;
2. the value v_0.
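The whole algorithm mechanizes as follows. This is a minimal sketch (all helper names are ours, not the authors'), with the sign-expansion ordinal sum from the Section 2 sketch repeated so the block runs standalone; it assumes a well-formed input position and reproduces both worked examples below.

```python
from fractions import Fraction

def reduce_position(g: str) -> str:
    """Step 2: apply reductions 2(a) and 2(b) until neither applies."""
    changed = True
    while changed:
        changed = False
        # 2(a): 0 1^{3+j}, j >= 1, becomes 1 0 1^j.
        for k in range(len(g) - 4):
            if g[k] == '0' and g[k + 1:k + 5] == '1111':
                e = k + 5
                while e < len(g) and g[e] == '1':
                    e += 1
                run = e - (k + 1)                  # length of the 1-run, >= 4
                g = g[:k] + '10' + '1' * (run - 3) + g[e:]
                changed = True
                break
        if changed:
            continue
        # 2(b): 0111 beta (a run of exactly three 1s), with beta containing
        # an even number of 1s, becomes 10 beta.
        for k in range(len(g) - 3):
            if (g[k:k + 4] == '0111'
                    and (k + 4 == len(g) or g[k + 4] != '1')
                    and g[k + 4:].count('1') % 2 == 0):
                g = g[:k] + '10' + g[k + 4:]
                changed = True
                break
    return g

def signs(x: Fraction) -> list:
    out, lo, hi, y = [], None, None, Fraction(0)
    while y != x:
        if x > y:
            out.append(1)
            lo = y
        else:
            out.append(-1)
            hi = y
        y = lo + 1 if hi is None else (hi - 1 if lo is None else (lo + hi) / 2)
    return out

def unsigns(s: list) -> Fraction:
    lo, hi, y = None, None, Fraction(0)
    for sgn in s:
        if sgn > 0:
            lo = y
        else:
            hi = y
        y = lo + 1 if hi is None else (hi - 1 if lo is None else (lo + hi) / 2)
    return y

def ordinal_sum(g: Fraction, h: Fraction) -> Fraction:
    return unsigns(signs(g) + signs(h))

def value(g: str) -> Fraction:
    """Steps 3-7 on a reduced position; 0s after the last 1 are inert."""
    ones = [k for k, c in enumerate(g) if c == '1']
    if not ones:
        return Fraction(0)
    if len(ones) == 1:                              # G_i = 0^r 1, value -r
        return Fraction(-ones[0])
    a = len(ones) - 2
    if ones[:a] == list(range(a)):                  # G_i = 1^a 0^p 1 0^q 1
        p, q = ones[-2] - a, ones[-1] - ones[-2] - 1
        return Fraction(a // 2) + Fraction(1, 2 ** (2 * p + q))
    # Step 3: G_i = alpha 0 1^a 0^p 1 0^q 1; split off Q_i = 0^p 1 0^q 1.
    p = ones[-2] - ones[-3] - 1
    q = ones[-1] - ones[-2] - 1
    return ordinal_sum(value(g[:ones[-3] + 1]),     # G_{i+1} = alpha 0 1^a
                       Fraction(1, 2 ** (2 * p + q - 1)))

def flipping_coins_value(g: str) -> Fraction:
    return value(reduce_position(g))

print(flipping_coins_value("10011110110110111011110011"))  # 10257/16384
print(flipping_coins_value("01001110110111011101"))        # -893/1024
```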

First, we illustrate the algorithm with the following example. Consider the position G = 10011110110110111011110011. We mark at each step which reduction is being applied: 2(a) is denoted by †, whereas 2(b) is denoted by ‡. The algorithm gives that

10011110110110111011110011 = 10011110110110111011110011(†)
= 100111101101101111010011(†)
= 1001111011011101010011(‡)
= 10011110111001010011(‡)
= 100111110001010011(†)
= 1010110001010011.

Step 3 partitions the last expression into 101(011)(000101)(0011) so that the ordinal sum is given by

v_0 = ((1/2 : 1/2) : 1/64) : 1/8 = 10257/16384.

Now let H = 01001110110111011101. The reductions give that

01001110110111011101 = 010011101110011101
= 0100111100011101
= 01010100011101.

The last expression partitions into 01(0101)(00011)(101) so that

v_0 = ((−1 : 1/4) : 1/32) : 1 = −893/1024.

The next theorem is the main result of the paper.

Theorem 3.3 (Value theorem). Let G be a flipping coins position. If v0 is the value
obtained by the algorithm applied to G, then G = v0 .

In the next section, we derive several results that will be used to prove Theo-
rem 3.3. The proof of Theorem 3.3 will appear in Section 4.1.

4 Best moves and reductions


The proofs in this section use induction on the options. An alternate but equivalent
approach is to regard the techniques as induction on the associated binary number of
the positions. The proofs require detailed examination of the positions, and we will
use notation suitable to the case being considered. Often, a typical position will be
written as a combination of generic strings and the substring under consideration.
For example, 111011000110101 might be parsed as (11101)(100011)(0101) and written
as α100011β or, more compactly, as α10^3 1^2 β.
We require several results before being able to prove Theorem 3.3. We begin by
proving a simplifying reduction, followed by the best moves for each player, and then
the remaining reductions used in the algorithm.

As an immediate consequence of Theorems 3.2 and 2.2, we have the following:

Corollary 4.1. Let α, β, and γ be arbitrary binary strings. We then have that α1β0γ > α0β1γ. Moreover, for an integer r ⩾ 0, we have that β10^r 1 > β.

Proof. Recall that by Theorem 3.2 all flipping coins positions are numbers. Thus Theorem 2.2 applies.
A Right option of α0β1γ is α1β0γ, and so we have that α1β0γ > α0β1γ. Similarly, a Left option of β10^r 1 is β, and so we have that β10^r 1 > β.

Next, we prove the best moves for each player. Right wants to play the zero furthest
to the right and the 1 adjacent to it. Left wants to play the two ones furthest to the right.

Theorem 4.2. Let G be a flipping coins position, where in G, r and n, r ≠ n, are the
greatest indices such that dr = dn = 1. Let s be the greatest index such that ds = 0. Left’s
best move is to play (dr , dn ), and Right’s best move is to play (ds , ds+1 ).

Proof. We prove this theorem by induction on the options. Note that we use the equiv-
alent binary representation of the game position. If there are three or fewer bits, then,
by exhaustive analysis, the theorem is true.
Let G be d1 d2 . . . dn . We begin by proving Left’s best moves. Let r and n be the two
largest indices, where dr = dn = 1, and thus dk = 0 for r < k < n. Let i and j, i < j, be two
indices with di = dj = 1. We use the notation G(di , dj , dr , dn ) to highlight the salient bits.
The claimed best Left move is from G(1, 1, 1, 1) to G(1, 1, 0, 0). This must be compared to
any other Left move, represented by moving from G(1, 1, 1, 1) to G(0, 0, 1, 1). That is, we
need to show that G(1, 1, 0, 0) − G(0, 0, 1, 1) ⩾ 0.
For the moves to be different, at least three of i, j, r, n are distinct. We first assume
that the four indices are distinct. In this case, we have that i < j < r < n. By applying
Corollary 4.1 twice we have that

G(1, 1, 0, 0) > G(1, 0, 0, 1) > G(0, 0, 1, 1).

We may assume then, without loss of generality, that j = r or j = n. If j = n, then


i < r, since there are two distinct moves. Now consider G(di , dr , dn ) = G(1, 1, 1). By
Corollary 4.1 we have that if j = r, then G(1, 0, 0) > G(0, 0, 1), and if j = n, then
G(1, 0, 0) > G(0, 1, 0).
We now prove Right’s best move. There are more cases to consider. Let s be the
largest index such that ds = 0 and therefore ds+1 = 1. Let i, j, i < j be indices with
di = 0 and dj = 1. The claimed best move is ds , ds+1 , and this must be compared to the
arbitrary Right move di , dj . For the moves to be different, there must be at least three
distinct indices.
The original position is either

G(di , dj , ds , ds+1 ) = G(0, 1, 0, 1), i < s,



or

G(ds , ds+1 , dj ) = G(0, 1, 1), i = s, j > s + 1.

We need to show either D = G(1, 0, 0, 1) − G(0, 1, 1, 0) ⩾ 0 or D = G(1, 1, 0) − G(1, 0, 1) ⩾ 0,


respectively. Suppose Right plays in the first summand of D. Note that, by induction,
the best moves of Left and Right are known.
1. First, suppose j < s. By induction Right’s best move in the first summand of D is
to D′ = G(1, 0, 1, 0) − G(0, 1, 1, 0). Since i < j, it follows that G(1, 0, 1, 0) is a Right
option of G(0, 1, 1, 0), and thus D′ is positive by Corollary 4.1.
2. If j = s + 1, then there are only three distinct indices. The original game is
G(di , ds , ds+1 ) = G(0, 0, 1) and D = G(1, 0, 0) − G(0, 1, 0). Since G(1, 0, 0) is a Right
option of G(0, 1, 0), it follows that D is positive by Corollary 4.1.
3. Suppose j > s + 1.
If i < s, then the original game is of the form

G = αd_i βd_s d_{s+1} 1^a d_j 1^b = α0β011^a 11^b, a ⩾ 0, b ⩾ 0,

and

D = α1β011^a 01^b − α0β101^a 11^b.

Two applications of Corollary 4.1 give

α1β011^a 01^b ⩾ α0β111^a 01^b ⩾ α0β101^a 11^b.

If i = s, then

G = αd_s d_{s+1} 1^a d_j 1^b = α011^a 11^b, a ⩾ 0, b ⩾ 0,

and

D = α111^a 01^b − α101^a 11^b.

One application of Corollary 4.1 gives

α111^a 01^b ⩾ α101^a 11^b.

Thus D ⩾ 0.

Next, we consider Right moving in the second summand of D = G(1, 0, 0, 1)−G(0, 1, 1, 0).
Note that by the choices of the subscripts, dℓ = 1 if n ⩾ ℓ ⩾ s + 1.

1. If n > s + 2, then Right’s best move in the second summand is to change dn−1 , dn
from (1, 1) to (0, 0). Left copies this move in the first summand, and the resulting
difference game is nonnegative by induction.
2. Suppose n = s + 2.
i. If j < s + 1, then G(di , dj , ds , ds+1 , ds+2 ) = G(0, 1, 0, 1, 1) and D = G(1, 0, 0, 1, 1) −
G(0, 1, 1, 0, 1). Right’s best move is to G(1, 0, 0, 1, 1) − G(0, 1, 0, 0, 0). Left moves
to G(1, 0, 0, 0, 0)−G(0, 1, 0, 0, 0). This is positive by Corollary 4.1, and Left wins.
For the next two subcases, exactly two 1s will occupy two of the four indexed
positions. Since Right is moving in the second summand, he is changing two
1s to two 0s. Thus Left’s best response for each case is to move in the first
summand, bringing the game to G(0, 0, 0, 0) − G(0, 0, 0, 0) = 0, and she wins.
For these cases, we only list the original position. The strategy for both cases
is as just described.
ii. If j = s + 1, then G(di , ds , ds+1 , ds+2 ) = G(0, 0, 1, 1) and D = G(1, 0, 0, 1) −
G(0, 1, 0, 1).
iii. If j = s + 2, then G(di , ds , ds+1 , ds+2 ) = G(0, 0, 1, 1) and D = G(1, 0, 1, 0) −
G(0, 1, 0, 1).
3. Now suppose n = s + 1.
i. If j < s + 1, then let ℓ < s + 1 be the largest index such that dℓ = 1.
If j < ℓ, then we have G(di , dj , dℓ , ds , ds+1 ) = G(0, 1, 1, 0, 1) and D = G(1, 0, 1, 0, 1) −
G(0, 1, 1, 1, 0). Right’s best move is to G(1, 0, 1, 0, 1) − G(0, 1, 0, 0, 0). Left moves
to G(1, 0, 0, 0, 0)−G(0, 1, 0, 0, 0), which is positive since G(1, 0, 0, 0, 0) is a Right
option of G(0, 1, 0, 0, 0).
If j = ℓ, then G(di , dj , ds , ds+1 ) = G(0, 1, 0, 1) and D = G(1, 0, 0, 1) − G(0, 1, 1, 0).
Right’s best move is to G(1, 0, 0, 1) − G(0, 0, 0, 0). Left moves to G(0, 0, 0, 0) −
G(0, 0, 0, 0) = 0, and Left wins.
ii. If j = s + 1, then G(di , ds , ds+1 ) = G(0, 0, 1) and D = G(1, 0, 0) − G(0, 1, 0). This is
positive by Corollary 4.1.

In all cases, Left wins D moving second, proving the result.

Suppose in a position that the bits of the best Right move are different from those of the best Left move. The next lemma essentially says that the positions before and after one move by each player are equal. It is phrased in a way that is useful for reducing the length of the position. Recall that a nontrivial position looks like G = α01^a 0^p 10^q 1β, where a, p, and q are nonnegative integers, and α and β are arbitrary binary strings. For the algorithm, it suffices to prove the result for β being empty. However, it is useful, certainly for a human, to reduce the length of the position as much as possible.

Lemma 4.3. Let α be an arbitrary binary string. If a ⩾ 0, then we have that α01111^a = α101^a.
The game of flipping coins | 11

Proof. Let H = α01111^a − α101^a. We need to show that H = 0. To simplify the proof, in some cases the second player will play suboptimal moves. We have several cases to consider.
1. If a ⩾ 2, then playing the same move in the other summand is a good response. After two such moves, we have either

α01111^{a−2} − α101^{a−2} = 0 by induction

or

α10111^a − α1101^{a−1} = α1101^{a−1} − α1101^{a−1} = 0 by induction.

2. If a = 1, then H = α01111 − α101. The cases are:
i. Left plays in the first summand to α011 − α101; then Right moves to α101 − α101 = 0.
ii. Right plays in the second summand to α01111 − α; then Left moves to α011 − α. Since (α011)^L = α, we have α011 > α.
iii. Right plays in the first summand to α10111 − α101; then Left responds to α101 − α101 = 0.
iv. Left plays in the second summand to α01111 − α11; then Right moves to α10111 − α11 = α11 − α11 = 0 by induction.
3. If a = 0, then H = α0111 − α1. There are several cases to consider.
i. If Left or Right plays in the first summand, then the response is in the first summand, giving α1 − α1 = 0.
ii. If Left plays in the second summand, then since there is a Left move, we have α = β01^b, b ⩾ 0. If b > 0, then we have that β01^b 0111 − β01^b 1, and Left moves to β01^b 0111 − β101^b. Here Right responds to β101^{b−1} 0111 − β101^b, which by induction is equal to β101^{b−1} 1 − β101^b = 0. If b = 0, then we have that β01^b 0111 − β01^b 1 = β00111 − β01, and we want to show that Right can win moving second. Left plays to β00111 − β10, and Right can respond to β01110 − β1, which, by induction, is equal to β1 − β1 = 0.
iii. Right plays in the second summand. Then for a Right move to exist, α = β10^a, a ⩾ 0. Thus H = β10^a 0111 − β10^a 1, and Right moves to β10^a 0111 − β. Left responds by moving to β00^a 011 − β. We then have that (β00^a 011)^L = β, and thus β00^a 011 > β. Hence we find that β00^a 011 − β > 0.

In all cases the second player wins H, thereby proving the result.

There are reductions that can be applied to the middle of the position, but extra
conditions are needed.

Lemma 4.4. Let α and β be arbitrary binary strings where either (a) β starts with a 1, or
(b) β starts with 0 and has an even number of 1s. We then have that

α0111β = α10β.

Proof. Let H = α0111β − α10β. We need to show that H = 0. We have several cases to
consider.
1. If β is empty or β = 1^a, then H = 0 by Lemma 4.3. Therefore we may assume that β has at least one 1 and one 0.
2. If β = 1γ1 (β must end in a 1), then in both summands the best moves are pairs of
bits in β and −β. If each player copies the opponent’s move in the other summand,
then this leads to

α0111β − α10β → α0111β′ − α10β′ ,

and the latter expression is equal to 0 by induction.


3. If β ≠ 1γ1, then β = 0γ1, and γ1 has at least two 1s. The best moves are in β and −β
and are the best responses to each other. We then derive that

α0111β − α10β → α0111β′ − α10β′ = 0 by induction.

In all cases, H = 0, and this concludes the proof.


In Lemma 4.4 the conditions are necessary. An example is

3/8 = 011101 ≠ 1001 = 1/4.

Here β starts with a 0 and has an odd number of 1s.


These reduction lemmas are important in evaluating a position. The reduced po-
sitions will end in 011 or 01. By considering the exact end of the string, specifically, if
there are at least two 0s (in one special case, three 0s), then we can find an ordinal
sum decomposition. The decomposition is determined by where the third rightmost 1
is situated.
The next result is the start of the ordinal sum decomposition of a position. The
exponent is the value of the Right option of the substring being removed.

Lemma 4.5. Let α be an arbitrary binary string. If a ⩾ 1 and p and q are nonnegative integers such that p + q ⩾ 1, then

α01^a 0^p 10^q 1 = α01^a : 1/2^{2p+q−1}.

Proof. We prove that

α01^a 0^p 10^q 1 − (α01^a : 1/2^{2p+q−1}) = 0.

Note that by Theorem 2.4 playing in the base of α01^a : 1/2^{2p+q−1} is worse than playing in the exponent. We have two cases to consider.
1. Left plays first in the first summand, and Right responds in the second summand, or Right plays first in the second summand, and Left responds in the first summand. In either case, Right has a move in the exponent (moving it to 0) since 2p + q − 1 ⩾ 0. In either order the final position is given by

α01^a − (α01^a : 0) = α01^a − α01^a = 0.

2. Right plays first in the first summand, and Left responds in the second summand, or Left plays first in the second summand, and Right responds in the first summand. In either case, we consider

α01^a 0^p 10^q 1 − (α01^a : 1/2^{2p+q−1}).

We have two subcases.
i. Assume that 2p + q − 1 ≠ 0. After the two moves, we have the position

α01^a 0^r 10^s 1 − (α01^a : 1/2^{2p+q−2}),

where 2r + s = 2p + q − 1. By induction we have that

α01^a 0^r 10^s 1 = α01^a : 1/2^{2r+s−1} = α01^a : 1/2^{2p+q−2}.

Thus α01^a 0^r 10^s 1 − (α01^a : 1/2^{2p+q−2}) = 0.
ii. Assume that 2p + q − 1 = 0, that is, q = 1 and p = 0. The original position is

α01^a 101 − (α01^a : 1).

After the two moves, we have the position α01^a 11 − α101^{a−1} (note that Left has no move in the exponent). By Lemma 4.3, α01^a 11 = α101^{a−1}. Hence we have that α01^a 11 − α101^{a−1} = 0, and the result follows.

The values of the positions not covered by Lemma 4.5 are given next.

Lemma 4.6. Let a, p, and q be nonnegative integers. We then have that

0^p 1 = −p and 1^a 0^p 10^q 1 = ⌊a/2⌋ + 1/2^{2p+q}.

Proof. Let G = 0^p 1. Left has no moves, and Right has p. Note that in 1^a, Left has ⌊a/2⌋ moves, and Right has none.
Now let G = 1^a 0^p 10^q 1. We proceed by induction on p + q. In all cases, Left's move is to 1^a, that is, to ⌊a/2⌋. If p = 0 and q = 0, then G = 1^a 11, which has the value ⌊a/2⌋ + 1/2^0 = ⌊a/2⌋ + 1. Assume that p + q = k, k > 0. If q > 0, then G = {⌊a/2⌋ | 1^a 0^p 10^{q−1} 1}. By induction we have that

G = {⌊a/2⌋ | ⌊a/2⌋ + 1/2^{2p+q−1}} = ⌊a/2⌋ + 1/2^{2p+q}.

If q = 0, then G = {⌊a/2⌋ | 1^a 0^{p−1} 10^1 1}. By induction we have that

G = {⌊a/2⌋ | ⌊a/2⌋ + 1/2^{2(p−1)+1}} = ⌊a/2⌋ + 1/2^{2p},

and the result follows.

4.1 Proof of the value theorem


We now have all the tools to prove Theorem 3.3.

Proof of Theorem 3.3. Let G be a flipping coins position. Step 2 reduces the binary string. The reductions in Step 2(a) are those of Lemma 4.3 and Lemma 4.4(a). The reductions in Step 2(b) are those of Lemma 4.4(b). In all cases, these lemmas show that each new reduced position is equal to G.
In Step 3, we claim that G_i ≠ β1^3 for any β. This is true for i = 0 by Lemma 4.3. If i > 0, then at each iteration of Step 3, the last two 1s are removed from G_{i−1}. Now the original reduced position would be G_0 = β1^3 γ, where γ has an even number of 1s. Lemma 4.4(b) would apply, eliminating the three consecutive 1s. Now either G_i is one of 0^r 1, r ⩾ 0, or 1^a 0^{p_i} 10^{q_i} 1, a ⩾ 0, p_i + q_i ⩾ 0, or G_i = α01^a 0^{p_i} 10^{q_i} 1, p_i + q_i ⩾ 1, a > 0. In the latter case the index is incremented, and the algorithm goes back to Step 3.
Step 5 applies when Step 3 no longer applies, i.e., G_i is one of 0^r 1, r ⩾ 0, or 1^a 0^{p_i} 10^{q_i} 1, a ⩾ 0, p_i + q_i ⩾ 0. Now v_i is the value of G_i as given in Lemma 4.6.
Lemma 4.5 shows that for each j < i, G_j = G_{j+1} : 1/2^{2p_j + q_j − 1}, the evaluation in Step 6. Thus the value of G is v_0, and the theorem follows.

The question “Who wins 0101011111 + 1101100111 + 0110110110111 and how?” from Section 1 can now be answered.
First, we have that

0101011111 = 01011011 = (01011 : 1/2) = ((01 : 1/2) : 1/2) = ((−1 : 1/2) : 1/2) = −11/16,
1101100111 = 1101101 = (1101 : 1) = (1/2 : 1) = 3/4,
0110110110111 = 0110110111 = 0110111 = 0111 = 0.

Thus we have that

0101011111 + 1101100111 + 0110110110111 = −11/16 + 3/4 + 0 = 1/16.

Left's only winning move is to

01010111 + 1101100111 + 0110110110111 = −3/4 + 3/4 + 0 = 0.

Her best move in the second summand gives a sum of −11/16 + 5/8 + 0 = −1/16, and in the third, it gives −11/16 + 3/4 − 1/8 = −1/16. Left loses both times.

5 Future directions
Natural variants of flipping coins involve increasing the number of coins that can
be flipped from two to three or more. A brief computer search suggests that the only
version where the values are numbers is the game in which Left flips a subsequence
of all 1s and Right flips a subsequence of 0s ended by a 1. We conjecture that a simi-
lar ordinal sum structure will arise in these variants. Other variants have values that
include switches, tinies, minies, and other three-stop games. However, some variants,
when the reduced canonical values are considered, only seem to consist of numbers
and switches. A more thorough investigation should shed light on their structures.

Kyle Burke, Matthew Ferland, Michael Fisher, Valentin Gledel, and
Craig Tennenhouse
The game of blocking pebbles
Abstract: Graph pebbling is a well-studied single-player game on graphs. We intro-
duce the game of blocking pebbles, which adapts Graph Pebbling into a two-player
strategy game to examine it within the context of combinatorial game theory. Positions
with game values matching all integers, all nimbers, and many infinitesimals and
switches are found. This game joins the ranks of other combinatorial games on graphs,
games with discovered moves, and partisan games with impartial movement options.
The computational complexity of the general case is shown to be PSPACE-hard.

1 Introduction
Graph pebbling is an area of current interest in graph theory. In an undirected graph
G, a root vertex r is designated. Heaps of pebbles are placed on the vertices of G, with
a legal move consisting of choosing a vertex v with at least two pebbles, removing two
pebbles, and placing a single pebble on a neighbor of v. The goal is to pebble or place a
pebble on the vertex r. The pebbling number of G, denoted π(G), is the smallest number
of pebbles such that, for any initial distribution of π(G) pebbles among the vertices of
G and any choice of root vertex, there is a sequence of moves resulting in the root being
pebbled.
Since its introduction by Chung [5] in 1989, a number of results on the pebbling of
different families of graphs have been found. Of note are pebbling numbers of paths,
cycles [13], and continuing work on a conjecture of Graham on Cartesian products of
graphs [5].
Time complexity is also known, both for determination of π(G) and for the minimum
number of moves in a successful pebbling solution, for general graphs. See [9] for a
survey of results in graph pebbling.
The results and language here are in reference to combinatorial game theory
(CGT). The nim sum, also called the digital sum, of nonnegative integers is the result of
their sum in binary without carry. This is denoted x1 ⊕ x2 if there are only two numbers,
and in the case of more, we use the notation ∑⊕ xi. For more notation and background
on the computation of CGT game values, we refer the reader to [3, 1].

Kyle Burke, Dept. of Computer Science, Plymouth State University, Plymouth, New Hampshire, USA,
e-mail: kwburke@plymouth.edu
Matthew Ferland, Dept. of Computer Science, University of Southern California, Los Angeles,
California, USA, e-mail: mferland@usc.edu
Michael Fisher, Dept. of Mathematics, West Chester University, West Chester, Pennsylvania, USA,
e-mail: mfisher@wcupa.edu
Valentin Gledel, Dept. of Computer Science, University Grenoble Alpes, Grenoble, France, e-mail:
valentin.gledel@univ-grenoble-alpes.fr
Craig Tennenhouse, Dept. of Mathematical Sciences, University of New England, Biddeford, Maine,
USA, e-mail: ctennenhouse@une.edu

https://doi.org/10.1515/9783110755411-002

Figure 2.1: A BRG-hackenbush position with blue, red, and green represented by thin, thick black,
and grey lines, respectively.
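In code, the nim sum is simply bitwise XOR. A minimal Python sketch (our own
illustration; the function name is ours, not from the cited texts):

```python
from functools import reduce

def nim_sum(*heaps: int) -> int:
    """Binary addition without carry: the bitwise XOR of all heap sizes."""
    return reduce(lambda x, y: x ^ y, heaps, 0)

# 5 ⊕ 3 = 0b101 ⊕ 0b011 = 0b110 = 6, and 1 ⊕ 2 ⊕ 3 = 0.
assert nim_sum(5, 3) == 6 and nim_sum(1, 2, 3) == 0
```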
In Section 2, we introduce a two-player combinatorial ruleset based on graph peb-
bling, with subsequent sections addressing results on both impartial and partisan po-
sitions. This game involves strategic play that results in blocking the moves of one’s
opponent. Amazons is another well-known game that also involves a notion of
blocking. In Amazons, however, the blocking is either permanent (a burnt square) or
temporary (a square occupied by a queen). Due to the standard pebbling toll in
Blocking Pebbles, each pebble only has mobility for a finite time.
There are several pebbling games that appear in the literature [10, 14, 11]. The one
which is most similar to the game introduced here was originated by Lagarias and Saks
in 1989 to solve a problem of Erdős. These games do not include the nontoll moves
across an edge in the “wrong direction.” This type of move is unique to Blocking
Pebbles (as far as we are aware). There are also other pebbling games older than that
introduced by Lagarias and Saks [10]. These games bear no resemblance to Blocking
Pebbles and are used to study graph algorithm complexity.

2 Ruleset and play


A game of blocking pebbles consists of a directed acyclic graph G and a 3-tuple
(b, r, g) at each vertex of G, representing the numbers of blue, red, and green pebbles.
Left may move blue and green pebbles, whereas Right may move red and green. This
follows one convention of BRG-hackenbush (see Fig. 2.1) wherein players may remove
an edge of their own color or the neutral color green. In BRG-hackenbush all dyadic
rationals and nimbers are achievable game values. In addition, when allowing for in-
finite positions, all real numbers and ordinals are achievable values, but switches are
not. By contrast, in blocking pebbles players may move any number of pebbles at a
single vertex within certain constraints on the graph and pebble distribution. In this
way, blocking pebbles is similar to graph nim [8, 4].

Figure 2.2: A position in blocking pebbles and two of Left’s options.

Ruleset 1. Given a tuple of the form (b, r, g) at each vertex of a directed acyclic graph
G, Left can make one of the following two moves from the vertex v.
1. Move a positive number of blue and/or green pebbles from v to an in-neighbor
of v.
2. Remove two blue and/or green pebbles from v and place one on an out-neighbor
of v and discard the other.

No blue pebbles can be moved to a vertex with a nonzero number of red pebbles. Right
has the obvious symmetric moves.
Play proceeds following the normal play convention, where the last player to make
a legal move wins.

Note that if Left removes one blue and one green pebble from v, then she may
add the green to v’s out-neighbor. However, it is always preferable to instead add the
blue as this results in a position with more blue pebbles and increases the number of
vertices blocked by Left.
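To make the ruleset concrete, the following Python sketch enumerates Left’s options
generated at a single vertex. The encoding (a dict of (blue, red, green) tuples plus
in/out adjacency lists) is our own illustrative choice, not from the paper; Right’s
options are obtained symmetrically by exchanging the roles of blue and red.

```python
def left_options(pebbles, in_nbrs, out_nbrs, v):
    """Yield Left's options generated at vertex v under Ruleset 1.

    pebbles: dict vertex -> (blue, red, green) counts; in_nbrs/out_nbrs:
    dicts vertex -> list of neighbors.  Only blue is blocked by red.
    """
    b, r, g = pebbles[v]
    # Type 1: move a positive number of blue and/or green pebbles to an
    # in-neighbor u; blue may not enter a vertex holding red pebbles.
    for u in in_nbrs[v]:
        max_blue = 0 if pebbles[u][1] > 0 else b
        for db in range(max_blue + 1):
            for dg in range(g + 1):
                if db + dg == 0:
                    continue
                new = dict(pebbles)
                new[v] = (b - db, r, g - dg)
                ub, ur, ug = new[u]
                new[u] = (ub + db, ur, ug + dg)
                yield new
    # Type 2: the toll -- remove two blue/green pebbles from v, place one
    # of them on an out-neighbor u, and discard the other.
    for u in out_nbrs[v]:
        for (sb, sg), placed in [((2, 0), "blue"), ((1, 1), "blue"),
                                 ((1, 1), "green"), ((0, 2), "green")]:
            if sb > b or sg > g:
                continue
            if placed == "blue" and pebbles[u][1] > 0:
                continue  # blue placement blocked by a red pebble
            new = dict(pebbles)
            new[v] = (b - sb, r, g - sg)
            ub, ur, ug = new[u]
            new[u] = (ub + 1, ur, ug) if placed == "blue" else (ub, ur, ug + 1)
            yield new
```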
As an example, consider the position in Figure 2.2. At the top is a position in block-
ing pebbles. Note that Left cannot move any blue pebbles from vertex A to B since B
already contains a red pebble. However, Left can move a single blue pebble from A to
C at a cost of one blue pebble. She can also move the one green pebble from D to C.

An interesting property of this ruleset is the existence of discovered moves, similar
to discovered attacks in chess. A player may be unable to move at one point in the
game, but after their opponent moves, the game is once again playable by the first
player. As an example, consider a simple out-star with two red pebbles on the source
and a single blue pebble on a sink node. Left has no moves, but once Right moves, Left
can move their pebble to the source. The presence of discovered moves precludes this
being a strong placement game. For more on these types of games, see [7] and [12].

3 Blue-red-green blocking pebbles


In this section, we will address some families of game values that are achievable in
blocking pebbles. We will only address finite graphs, and hence we will not en-
counter nondyadic rationals. This is similar to BRG-hackenbush, described in Sec-
tion 1. Due to the complexity of analysis, we will also restrict our graphs to orientations
of stars, paths, and small graphs.
We begin with a simple result.

Theorem 3.1. For every k ∈ ℤ, there is a position in blocking pebbles with value k.

Proof. Let G be a single arc directed from u to v. If k > 0, then place 2k blue pebbles
and a single red pebble on u, and no pebbles on v. Switch red and blue pebbles if
k < 0. This allows for k-many moves for Left by moving blue pebbles from u to v, but
the presence of a red pebble on u prevents moving any blue pebbles in the reverse
direction. Zero is trivially achieved by a graph with no pebbles or by any number of
other pebble distributions.
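For instance, in the encoding of the sketch above, the value-k position of this proof
(for k > 0) would be set up as follows (again our own illustrative encoding):

```python
# Single arc u -> v with 2k blue pebbles and one red pebble on u.
k = 3
pebbles = {"u": (2 * k, 1, 0), "v": (0, 0, 0)}
in_nbrs = {"u": [], "v": ["u"]}
out_nbrs = {"u": ["v"], "v": []}
# Left has exactly k toll moves along u -> v; Right has none, and the red
# pebble on u blocks blue from ever moving back from v, so the value is k.
```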
Regarding infinitesimals, ↓ is realized by an out-star with two leaves; that is, a ver-
tex u with out-neighbors v1 , v2 . Vertex v1 has a blue pebble, and v2 has one red and one
green pebble. Left can move the blue or green pebble to u, which is simple to identify
as ∗. Right, however, can move the green to the source vertex u resulting in ∗, the red
to u resulting in zero, or both red and green pebbles to u, which is also a zero position.
The initial position is {∗ | 0, ∗} = ↓.
Due to the blocking rule, blocking pebbles is unusual among partisan
combinatorial games. In BRG-hackenbush the presence of a move for one player does
not inhibit moves for the other. In clobber, another two-player partisan combinato-
rial game (see [2]), the presence of a red piece actually encourages movement for Left,
and vice versa. This is a property common to all dicot games. However, in blocking
pebbles a single well-placed blue pebble, for example, can cut off many of Right’s
moves. The only other well-known ruleset with this property appears to be Amazons,
which does not allow for discovered moves. It is natural, then, that many positions
result in game values that are switches.

Parts (1) and (4) of the next result show that every integer switch is achievable
with a specified pebbling configuration on the out-star K1,2 .
In the following lemma, we use the following notation for a bLue/Red pebbling
configuration of the out-star K1,2 : [(a, b), [c, d], [e, f ]] is the configuration with a blue
pebbles and b red pebbles on the central vertex, c blue pebbles and d red pebbles on
one of the pendant vertices, and e blue pebbles and f red pebbles on the other pendant
vertex.

Lemma 3.2. The following results pertain to a given blocking pebbles configuration
on the out-star K1,2 .
1. For c ≥ 1, the position [(a, b), [c, 0], [0, 0]] has value −⌊b/2⌋ if a = 1 and value
   {⌊a/2⌋ − 1 | ⌊(a − b)/2⌋ + 1} if a ≥ 2,
2. for a, b, c, d ≥ 1, the position [(a, b), [c, 0], [0, d]] has value ⌊(a − b)/2⌋,
3. for a, b, c, d, e ≥ 1, the position [(a, b), [c, 0], [d, e]] has value ⌊(a − 1)/2⌋,
4. for a, b, c, d ≥ 1, the position [(0, 0), [a, b], [c, d]] has value {a + c − 1 | −(b + d − 1)},
5. for a, b, c ≥ 1, the position [(0, 0), [a, b], [0, c]] has value {a − 1 | −(3(b + c) − 5)},
6. for a, b ≥ 1, the position [(0, 0), [a, b], [0, 0]] has value {3a − 5 | −(3b − 5)},
7. for b ≥ 1, [(1, 0), [0, b], [0, 0]] and [(2, 0), [0, b], [0, 0]] are both zero positions.

Proof. For Case (1), the position [(1, 1), [c, 0], [0, 0]] is the zero position. It is also read-
ily checked that the position [(1, 2), [c, 0], [0, 0]] has value 0.
If b > 2, then Left has no move from [(1, b), [c, 0], [0, 0]]. From [(1, b), [c, 0], [0, 0]]
Right may move to the position [(1, b − 2), [c, 0], [0, 1]], which has value ⌊(−b + 3)/2⌋ =
−⌊b/2⌋ + 1 by induction. Hence [(1, b), [c, 0], [0, 0]] has value −⌊b/2⌋ as required.
If a ≥ 2, then Left’s best move from [(a, b), [c, 0], [0, 0]] is to [(a − 2, b), [c, 0], [1, 0]],
which has value ⌊(a − 2)/2⌋ (Right has no move from this position, and Left has
⌊(a − 2)/2⌋ moves). Right’s only move is to [(a, b − 2), [c, 0], [0, 1]], which has value
⌊(a − b + 2)/2⌋, also by induction. Hence [(a, b), [c, 0], [0, 0]] has value
{⌊a/2⌋ − 1 | ⌊(a − b)/2⌋ + 1} when a ≥ 2.
For Case (2), it is clear that the position [(1, 1), [c, 0], [0, d]] is a zero position. If
a ≥ 2, then from [(a, 1), [c, 0], [0, d]] Left has a move to [(a − 2, 1), [c + 1, 0], [0, d]], and
Right has no move. Thus [(a, 1), [c, 0], [0, d]] has value ⌊(a − 1)/2⌋ by induction.
A similar argument establishes the claim that [(1, b), [c, 0], [0, d]] has value ⌊(1 − b)/2⌋.
Now if a, b ≥ 2, then from [(a, b), [c, 0], [0, d]] Left has the move to [(a − 2, b), [c +
1, 0], [0, d]], and Right has the move to [(a, b − 2), [c, 0], [0, d + 1]]. By induction we see
that [(a, b), [c, 0], [0, d]] has value

    {⌊(a − 2 − b)/2⌋ | ⌊(a − b + 2)/2⌋} = {⌊(a − b)/2⌋ − 1 | ⌊(a − b)/2⌋ + 1} = ⌊(a − b)/2⌋.

For Case (3), note that if a = 1, then there are no moves for either player; the
formula given correctly yields the game value 0. If a = 2, then from the position
[(2, b), [c, 0], [d, e]] Left has the move to [(0, b), [c + 1, 0], [d, e]]. From here Left has no
move, and Right has e moves. Thus the position [(0, b), [c + 1, 0], [d, e]] has value −e.
Hence [(2, b), [c, 0], [d, e]] has value 0, as required.
If a > 2, then from [(a, b), [c, 0], [d, e]] Left can move to [(a − 2, b), [c + 1, 0], [d, e]],
which has value ⌊(a − 3)/2⌋ = ⌊(a − 1)/2⌋ − 1 by induction. Right has no moves from
[(a, b), [c, 0], [d, e]]. Hence [(a, b), [c, 0], [d, e]] has value

    {⌊(a − 1)/2⌋ − 1 | } = ⌊(a − 1)/2⌋,

as desired.
For Case (4), note that Left’s only move from [(0, 0), [a, b], [c, d]] is to [(1, 0), [a −
1, b], [c, d]]. This last position has value a + c − 1 by induction. Similarly, Right’s only
move from [(0, 0), [a, b], [c, d]] is to [(0, 1), [a, b−1], [c, d]]. This position has value −(b+
d − 1) by induction. It now follows that [(0, 0), [a, b], [c, d]] has value

{a + c − 1 | −(b + d − 1)}.

Cases (5) and (6) follow from the previous result, and Case (7) is trivial.

Figure 2.3: A transitive triple graph.

We now consider a transitive 3-cycle graph (Fig. 2.3) with vertices a, b, c and arcs
ab, ac, and bc. The pebbling configurations considered below are written so that the
first array entry corresponds to the source vertex a, the second corresponds to b, and
the third to the sink vertex c.
An interesting result concerning the transitive 3-cycle crops up from the somewhat
unnatural starting position where 1 blue pebble and k red pebbles occupy the same
starting vertex. Specifically, we prove the following:

Theorem 3.3. For k > 1, the pebbling configuration on the transitive 3-cycle given by
[[0, 0], [0, 0], [1, k]] has game value (3 − 3k) + +_(k−4) , where +_(k−4) = {0 | {0 | −(k − 4)}}
is the tiny of k − 4.

We see the game tree of the base case in Figure 2.4.

Figure 2.4: Game tree of the position [[0, 0], [0, 0], [1, 1]] on the transitive 3-cycle.

To prove this result, we consider several positions, which arise as subpositions of
the above pebbling configuration.

Lemma 3.4. Consider the following pebbling configurations of the transitive 3-cycle T.
Then the position
1. [[1, 0], [0, j], [0, k]] has value −3k − 2j + 2 if at least one of j or k is ≥ 1;
2. [[0, j], [1, 0], [0, k]] has value −3k − 2j + 3 if j, k ≥ 1;
3. [[0, 0], [1, 0], [0, k]] has value −3k + 3 if k ≥ 2 and value −1/2 if k = 1;
4. [[0, j], [1, 0], [0, 0]] has value −2j + 3 if j ≥ 2 and value 0 if j = 1;
5. [[0, j], [0, k], [1, 0]] has value −3k − 2j + 4 if j, k ≥ 1;
6. [[0, j], [0, 0], [1, 0]] has value −2j + 4 if j ≥ 2 and value 1 if j = 1;
7. [[0, 0], [0, k], [1, 0]] has value {−2k + 2 | −3k + 5} if k ≥ 2 and value 1/2 if k = 1;
8. [[0, 0], [0, j], [1, k]] has value {−3k − 2j + 2 | −4k − 3j + 5} if j ≥ 2 and k ≥ 1 and value
   {−3k | −4k + 3} if j = 1 and k ≥ 1;
9. [[0, j], [0, 0], [1, k]] has value {−3k − 2j + 3 | −4k − 2j + 5} if j, k ≥ 1; and
10. [[0, ℓ], [0, j], [1, k]] has value −4k − 3j − 2ℓ + 4 if j, k, ℓ ≥ 1.

Proof. All claims will be proven simultaneously using induction (on the height of the
game tree). Base cases are easily checked and left to the interested reader.

Case (1): From [[1, 0], [0, j], [0, k]] Left has no move; Right’s best move is to [[1, 0],
[0, j + 1], [0, k − 1]]. By induction this position has value −3k − 2j + 3 by (1). Hence
[[1, 0], [0, j], [0, k]] has value

{ | −3k − 2j + 3} = −3k − 2j + 2,

as desired.

Case (2): Left again has no move from the starting position. Right’s best move is to
[[0, j + 1], [1, 0], [0, k − 1]]. If k = 1, then this position has value −2j + 1 by (4); if k ≥ 2,
then this position has value −3k − 2j + 4 by (2). In either case, [[0, j], [1, 0], [0, k]] has
value −3k − 2j + 3.

Case (3): First, suppose that k ≥ 2. Left can move to [[1, 0], [0, 0], [0, k]] from
[[0, 0], [1, 0], [0, k]]. From (1), this position has value −3k + 2. Right’s best move is

to [[0, 1], [1, 0], [0, k − 1]] with value −3k + 4. Hence [[0, 0], [1, 0], [0, k]] has value

{−3k + 2 | −3k + 4} = −3k + 3.

If k = 1, then Left’s only move becomes [[1, 0], [0, 0], [0, 1]], which has value
−1 by (1), and Right’s only move is to [[0, 1], [1, 0], [0, 0]] with value 0 by (4). Thus
[[0, 0], [1, 0], [0, 1]] has value −1/2.

Case (4): For j ≥ 3, Left has no move from [[0, j], [1, 0], [0, 0]], and Right can move
to [[0, j − 2], [1, 0], [0, 1]] with value −2j + 4, by (2), giving [[0, j], [1, 0], [0, 0]] the game
value of −2j + 3.
If j = 2, then Right’s move is to [[0, 0], [1, 0], [0, 1]] with value −1/2. Thus [[0, 2], [1, 0],
[0, 0]] has a value of −1 (= −2 ⋅ 2 + 3).
Finally, if j = 1, then neither Left nor Right has a move from [[0, 1], [1, 0], [0, 0]],
and so its value is 0.

Case (5): Let k ≥ 2. Left has no move, and Right can move to [[0, j + 1], [0, k −
1], [1, 0]] (Right’s best move). By induction this position has value −3k − 2j + 5 giving
[[0, j], [0, k], [1, 0]] the value −3k − 2j + 4.
If k = 1, then, again, Left has no move. However, Right can move to [[0, j +
1], [0, 0], [1, 0]]. By (6) this position has value −2j + 2, thus giving [[0, j], [0, 1], [1, 0]] the
value −2j + 1.

Case (6): First, we consider the case j > 2. Left’s only move is to [[0, j], [1, 0], [0, 0]].
By (4) this position has value −2j + 3. Right’s only move is to [[0, j − 2], [0, 1], [1, 0]]. By
(5) this position has value −2j + 5. Hence, if j > 2, then [[0, j], [0, 0], [1, 0]] has value
−2j + 4.
If j = 2, then [[0, 2], [1, 0], [0, 0]] has value −1 by (4). Right’s move to [[0, 0], [0, 1], [1, 0]]
has value 1/2 by (7). Thus [[0, 2], [0, 0], [1, 0]] has value 0.
Finally, if j = 1, then Left’s move [[0, 1], [1, 0], [0, 0]] has value 0, and Right has no
moves. Thus [[0, 1], [0, 0], [1, 0]] has value 1.

Case (7): If k ≥ 2, Left’s move to [[1, 0], [0, k], [0, 0]] has value −2k + 2 by (1). In
this case, Right’s move to [[0, 1], [0, k − 1], [1, 0]] has value −3k + 5 by (5). Therefore
[[0, 0], [0, k], [1, 0]] has value

{−2k + 2 | −3k + 5}.

If k = 1, then [[1, 0], [0, 1], [0, 0]] has value 0, and [[0, 1], [0, 0], [1, 0]] has value 1.
Hence [[0, 0], [0, 1], [1, 0]] has value 1/2.

Case (8): First suppose j ≥ 2 and k ≥ 2. Then Left’s move to [[1, 0], [0, j], [0, k]] has
value −3k − 2j + 2 by (1). Right has two sensible moves: one to [[0, 1], [0, j], [1, k − 1]] and

one to [[0, 1], [0, j − 1], [1, k]]. The former has value −4k − 3j + 6 by (10), and the latter
has value −4k − 3j + 5, also by (10). Thus [[0, 0], [0, j], [1, k]] has value

{−3k − 2j + 2 | −4k − 3j + 5}.

Next, we look at the case where j ≥ 2 and k = 1. Left’s move to [[1, 0], [0, j], [0, 1]]
has value −2j − 1 by (1). Right’s move to [[0, 1], [0, j], [1, 0]] has value −3j + 2 by (5).
Right’s move to [[0, 1], [0, j − 1], [1, 1]] has value −3j + 1 by (10). Hence [[0, 0], [0, j], [1, 1]]
has value

{−2j − 1 | −3j + 1}.

We now consider the case j = 1 and k ≥ 2. Left’s only move is to [[1, 0], [0, 1], [0, k]].
This position has value −3k by (1). Right’s move to [[0, 1], [0, 0], [1, k]] has value {−3k +
1 | −4k + 3} by (9), and his move to [[0, 1], [0, 1], [1, k − 1]] has value −4k + 3 by (10).
Therefore the position [[0, 0], [0, 1], [1, k]] has value

{−3k | {−3k + 1 | −4k + 3}, −4k + 3}.

It can be shown that the option {−3k + 1 | −4k + 3} is reversible. Hence the canonical
form of the position [[0, 0], [0, 1], [1, k]] has value

{−3k | −4k + 3}.

Finally, we consider the case j = 1 and k = 1. Left’s move from [[0, 0], [0, 1], [1, 1]]
to [[1, 0], [0, 1], [0, 1]] has value −3 by (1). Right’s move to [[0, 1], [0, 0], [1, 1]] has value
{−2 | −1} = −3/2. The move to [[0, 1], [0, 1], [1, 0]] has value −1 by (5). Thus the position
[[0, 0], [0, 1], [1, 1]] has value {−3 | −3/2} = −2.

Case (9): First, suppose that k ≥ 2. Then Left’s move from [[0, j], [0, 0], [1, k]] to
[[0, j], [1, 0], [0, k]] has value −3k −2j+3 by (2). Right’s best move is to [[0, j], [0, 1], [1, k −
1]] with value −4k − 2j + 5, thus giving the position [[0, j], [0, 0], [1, k]] the game value
of

{−3k − 2j + 3 | −4k − 2j + 5}.

If k = 1, then Left’s only move has value −2j, again by (2). Right’s move to
[[0, j], [0, 1], [1, 0]] has value −2j + 1 by (5). Hence [[0, j], [0, 0], [1, 1]] has value

{−2j | −2j + 1} = (−4j + 1)/2.

Case (10): Let j = 1 and k ≥ 2. Left has no move from this starting position, and
Right has three sensible moves. Right can move to [[0, ℓ + 1], [0, 0], [1, k]] with value
{−3k − 2ℓ + 1 | −4k − 2ℓ + 3} by (9), or to [[0, ℓ + 1], [0, 1], [1, k − 1]] with value −4k − 2ℓ + 3

by (10), or to [[0, ℓ], [0, 2], [1, k − 1]] with value −4k − 2ℓ + 2 by (10). The last move is optimal
for Right, and hence the position [[0, ℓ], [0, 1], [1, k]] has game value −4k − 2ℓ + 1, as
required.
If j = 1 and k = 1, then Left has no move from [[0, ℓ], [0, 1], [1, 1]], and Right has
again three sensible moves. Right’s move to [[0, ℓ+1], [0, 0], [1, 1]] has value (−4ℓ−3)/2
by (9), Right’s move to [[0, ℓ], [0, 2], [1, 0]] has value −2ℓ − 2 by (5), and Right’s move to
[[0, ℓ + 1], [0, 1], [1, 0]] has value −2ℓ − 1 by (5). Therefore the position [[0, ℓ], [0, 1], [1, 1]]
has value −2ℓ − 3.
If j ≥ 2 and k = 1, then Left has no move from [[0, ℓ], [0, j], [1, 1]], and Right has
three moves, each not costing a pebble to make: Right can move to [[0, ℓ+1], [0, j], [1, 0]]
with value −3j−2ℓ+2 by (5), Right can move to [[0, ℓ], [0, j+1], [1, 0]] with value −3j−2ℓ+1
by (5), and Right can move to [[0, ℓ + 1], [0, j − 1], [1, 1]] with value −3j − 2ℓ + 1 by (10).
Hence the position [[0, ℓ], [0, j], [1, 1]] has value −3j − 2ℓ.
Finally, if j ≥ 2 and k ≥ 2, then, as in every other subcase, Left has no move.
Right has his usual three moves: Right can move to [[0, ℓ + 1], [0, j], [1, k − 1]] with value
−4k − 3j − 2ℓ + 6 by (10), to [[0, ℓ], [0, j + 1], [1, k − 1]] with value −4k − 3j − 2ℓ + 5, or to
[[0, ℓ + 1], [0, j − 1], [1, k]] with value −4k − 3j − 2ℓ + 5. Thus [[0, ℓ], [0, j], [1, k]] has game
value −4k − 3j − 2ℓ + 4.

With Lemma 3.4 in hand, we can now prove Theorem 3.3.

Proof. Left has two moves from the starting position [[0, 0], [0, 0], [1, k]]: Left can move
to [[1, 0], [0, 0], [0, k]] with value −3k + 2 by Lemma 3.4(1) or to [[0, 0], [1, 0], [0, k]] with
value −3k + 3 by 3.4(3). The latter move is clearly the optimal move for her.
There are two types of moves that Right can make: Right can move to [[0, ℓ], [0, 0],
[1, k − ℓ]], where 1 ≤ ℓ ≤ k, or to [[0, 0], [0, j], [1, k − j]], where 1 ≤ j ≤ k.
First suppose that Right moves to [[0, ℓ], [0, 0], [1, k − ℓ]], where 1 ≤ ℓ < k. This
position has value

{−3k + ℓ + 3 | −4k + 2ℓ + 5} = (−3k + ℓ + 3) + {0 | −k + ℓ + 2}

by Lemma 3.4(9).
Next, suppose that Right moves to [[0, k], [0, 0], [1, 0]]. This position has value

−2k + 4

by Lemma 3.4(6).
We now consider the other type of move for Right. Suppose that Right moves to
[[0, 0], [0, j], [1, k − j]], where 1 < j < k. This position has value

{−3k + j + 2 | −4k + j + 5} = (−3k + j + 2) + {0 | −k + 3}

by Lemma 3.4(8).

Next, suppose that Right moves to [[0, 0], [0, 1], [1, k − 1]]. This position has value

{−3k + 3 | −4k + 7} = (−3k + 3) + {0 | −k + 4}

by Lemma 3.4(8).
Finally, suppose that Right moves to [[0, 0], [0, k], [1, 0]]. This position has value

{−2k + 2 | −3k + 5} = (−2k + 2) + {0 | −k + 3}

by Lemma 3.4(7).
We will now show that the move to [[0, 0], [0, 1], [1, k − 1]] is Right’s optimal move.
First note that since {0 | −k + 4} ≤ 1, it follows that

(−3k + 3) + {0 | −k + 4} ≤ −3k + 4 < −2k + 4.

Next, observe that if k = 2, then

    (−3 ⋅ 2 + 3) + {0 | −2 + 4} = −2 < −3/2 = (−2 ⋅ 2 + 2) + {0 | −2 + 3}.

If k ≥ 3, then −k + 2 < {0 | −k + 3}, and so it follows that

−3k + 4 = (−2k + 2) + (−k + 2) < (−2k + 2) + {0 | −k + 3}.

Hence

(−3k + 3) + {0 | −k + 4} < (−2k + 2) + {0 | −k + 3}

for k ≥ 2.
To show that

(−3k + 3) + {0 | −k + 4} < (−3k + ℓ + 3) + {0 | −k + ℓ + 2}, ℓ ≥ 1,

it suffices to show that {ℓ | −k +2ℓ+2}+{k −4 | 0} > 0. To this end, note that Left’s move
to ℓ + {k − 4 | 0} is a winning first move. Right’s move to −k + 2ℓ + 2 + {k − 4 | 0} leads to
(−k + 2ℓ + 2) + (k − 4) = 2ℓ − 2 ≥ 0 after Left’s response. Right’s move to {ℓ | −k + 2ℓ + 2} + 0
is not better, leading to ℓ + 0 = ℓ ≥ 1.
Our last task is showing that

(−3k + 3) + {0 | −k + 4} < (−3k + j + 2) + {0 | −k + 3} for j > 1.

This can be established by showing that {j − 1 | −k + j + 2} + {k − 4 | 0} > 0. The
proof of this fact is virtually identical to that of the similar statement in the preceding
paragraph, and so it will be omitted.

It now follows that the value of the position [[0, 0], [0, 0], [1, k]] is

    {−3k + 3 | −3k + 3 + {0 | −k + 4}} = (−3k + 3) + {0 | {0 | −(k − 4)}} = (−3k + 3) + +_(k−4) .

In the table below, we present, without proof, other interesting game values
achievable as bLue/Red Blocking Pebbles positions.

    Underlying Digraph     Pebbling Configuration      Game Value
    Transitive 3-Cycle     [[1, 0], [2, 4], [0, 0]]    1/4
    Transitive 3-Cycle     [[3, 1], [0, 0], [0, 1]]    1/2
    Transitive 3-Cycle     [[2, 3], [0, 0], [1, 0]]    3/4
    Transitive 3-Cycle     [[4, 4], [0, 0], [0, 0]]    ±1/2
    Transitive 3-Cycle     [[3, 5], [0, 0], [1, 0]]    ↑∗
    Transitive 3-Cycle     [[3, 5], [2, 0], [0, 0]]    ↑[2] ∗
    Directed P3            [[0, 0], [2, 2], [0, 0]]    ∗2

We end this section with a short discussion of the differences between blocking peb-
bles and BRG-hackenbush.
As noted above, the blocking mechanic of blocking pebbles results in a prepon-
derance of switches, whereas BRG-hackenbush has no such positions. Also, while we
would be surprised to find a dyadic rational that is not the game value of some block-
ing pebbles position, we have found many dyadic rationals difficult to construct, even
with the use of computational methods. On the other hand, BRG-hackenbush positions
with noninteger rational game values are easily constructed.

3.1 Green-only games


The game of Blocking Pebbles restricted to green pebbles is an impartial game,
with positions admitting only nimbers as game values. We refer the interested reader
to [3, 1] for more on Sprague–Grundy theory and nimbers. Although there is no use for
players to employ a blocking strategy, the game remains mathematically interesting
for its connections to its roots in graph pebbling.
First, we consider in-stars and out-stars, with green pebble distributions denoted
by ⟩g0 , g1 , . . . , gn ⟨ and ⟨g0 , g1 , . . . , gn ⟩, respectively. In each case, gi ≥ 0, and g0 is the
number of pebbles on the center vertex.

Theorem 3.5. The value of an in-star with distribution ⟩g0 , g1 , . . . , gn ⟨ is ∗g0 .

Proof. We will demonstrate this using induction on g0 . First, note that if g0 = 0, then
any move of a green pebble to the center from a leaf, resulting in the loss of a pebble,
can be countered by returning it to the same leaf. Next, we note that any move from
⟩g0 , g1 , . . . , gn ⟨ results in a change to g0 and that there is a move from this position that
results in any number of pebbles on the center node strictly less than g0 . Hence the
in-star is equivalent to a nim heap of size g0 .

The nim dimension of a ruleset is the greatest integer k such that some position in the
ruleset has value ∗2^(k−1) but no position has value ∗2^k . A ruleset in which the nim di-
mension is unbounded is said to have infinite nim dimension, as Santos and Silva [6]
showed is true for konane. Theorem 3.5 implies that green blocking pebbles also
has infinite nim dimension, whereas the nim dimension of blue-red blocking peb-
bles is still unknown.
The fact revealed in Theorem 3.5 that an in-star is equivalent to a single nim heap
can be generalized to multiple heaps with an out-star.

Theorem 3.6. The value of an out-star with distribution ⟨g0 , g1 , . . . , gn ⟩ is
∗(g1 ⊕ g2 ⊕ ⋅ ⋅ ⋅ ⊕ gn ), that is, the nim sum of the leaf heaps.

Proof. We note that this game is analogous to nim, except that instead of removing
pebbles from a heap, they are moved to the center at no cost. The player with the ad-
vantage simply plays the winning Nim strategy. Any move of a pebble from the center
vertex to a leaf can immediately be reversed at a net cost of one pebble from the cen-
ter. Thus the number of pebbles at the center does not contribute to the game value,
which equals the nim sum of the leaf heaps.

On a path, we get a similar result.

Theorem 3.7. If (g1 , . . . , gn ) is a distribution of green pebbles along a path directed left
to right, then the game value is ∗(∑⊕ g2k ), the nim sum of the even-indexed heaps.

Proof. An empty path is trivial, so let us assume that the claim is false and consider
the set C of all counterexamples with the fewest total number of pebbles. From C,
let (g1^0 , g2^0 , . . . , gn^0 ) be the last counterexample when ordered lexicographically.
Any move from this position either decreases the total number of pebbles or increases
its lexicographic position. Therefore all options of (g1^0 , g2^0 , . . . , gn^0 ) are outside C,
and hence the claim holds for them. Since each option has a nim sum of even-indexed
heaps that differs from ∑⊕ g2k^0 , and all smaller sums are realized through nim moves
on the even heaps, we see that (g1^0 , g2^0 , . . . , gn^0 ) also satisfies the claim. Therefore,
C is empty, and the claim is true.

Note that in Theorems 3.5, 3.6, and 3.7 the strategy is equivalent to nim. In fact, in
these particular cases, blocking pebbles is very similar to the game of poker nim,
wherein players make nim moves but retain any removed pebbles, and may add them
to a heap instead of removing. Although poker nim is loopy and blocking pebbles
is not, both games played optimally have the same strategy and the same reciprocal
moves for non-nim moves.
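As a sanity check on Theorems 3.5–3.7, Grundy values of small green-only positions
can be computed by brute force. The sketch below (our own code; positions encoded as
tuples of pebble counts) verifies Theorem 3.7 on short directed paths.

```python
from functools import lru_cache, reduce
from itertools import product

@lru_cache(maxsize=None)
def grundy(path):
    """Grundy value of a green-only position on a path directed left to
    right; path lists the pebble counts from source to sink."""
    options = set()
    n = len(path)
    for v in range(n):
        # Move any positive number of pebbles back to the in-neighbor.
        if v > 0:
            for moved in range(1, path[v] + 1):
                p = list(path)
                p[v] -= moved
                p[v - 1] += moved
                options.add(grundy(tuple(p)))
        # Pay the toll: remove two pebbles, place one on the out-neighbor.
        if v < n - 1 and path[v] >= 2:
            p = list(path)
            p[v] -= 2
            p[v + 1] += 1
            options.add(grundy(tuple(p)))
    m = 0                      # mex of the option values
    while m in options:
        m += 1
    return m

def predicted(path):
    """Theorem 3.7: the nim sum of the even-indexed heaps g2, g4, ..."""
    return reduce(lambda x, y: x ^ y, path[1::2], 0)

# Spot-check the theorem on small positions of a 3-vertex path.
for path in product(range(4), repeat=3):
    assert grundy(path) == predicted(path), path
```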

Figure 2.5: An oriented tree T and the resulting graph D(T ) from Construction 3.8.

We now introduce a reduction formula for all trees, which can be applied to the three
previous results.

Construction 3.8. Let T be any oriented tree with a given distribution of green peb-
bles, let S be its set of source vertices, and let O be the set of vertices of T reachable by
an odd length directed path from some vertex in S. Additionally, for a given subset W
of vertices, let p(W) be the combined total number of pebbles on W.
We construct the digraph D(T) from T as follows (see Figure 2.5):
1. V(D(T)) = {σ} ∪ O, where O has the same pebbling distribution as it does in T, and
σ is a new vertex with no pebbles.
2. E(D(T)) = E(O) ∪ {σ → θ | θ ∈ O}.

Proposition 3.9. The game value of blocking pebbles on T is equal to the game value
on D(T).

Proof. The key observation is that the pebbling games on T and D(T) are both equiv-
alent to poker nim on the set O. Since the two games have the same set of nim moves,
their game values are equivalent.
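Construction 3.8 is straightforward to implement. The following sketch (our own code
and data layout, with arc lists standing in for the oriented tree) computes S, O, and
D(T):

```python
from collections import deque

def build_D(vertices, arcs, pebbles):
    """Construction 3.8: reduce an oriented tree T to the digraph D(T).

    vertices: iterable of vertex names; arcs: list of directed pairs (u, v);
    pebbles: dict vertex -> green pebble count.  Returns (V, E, pebbles').
    """
    out = {v: [] for v in vertices}
    indeg = {v: 0 for v in vertices}
    for u, v in arcs:
        out[u].append(v)
        indeg[v] += 1
    sources = [v for v in vertices if indeg[v] == 0]        # the set S
    # O: vertices reachable from S by a directed path of odd length.
    odd = set()
    queue = deque((s, 0) for s in sources)                  # (vertex, parity)
    seen = set(queue)
    while queue:
        v, parity = queue.popleft()
        if parity == 1:
            odd.add(v)
        for w in out[v]:
            if (w, 1 - parity) not in seen:
                seen.add((w, 1 - parity))
                queue.append((w, 1 - parity))
    # D(T): a fresh source sigma, the arcs of T inside O, and sigma -> O.
    V = {"sigma"} | odd
    E = [(u, v) for (u, v) in arcs if u in odd and v in odd]
    E += [("sigma", v) for v in odd]
    new_pebbles = {v: pebbles.get(v, 0) for v in odd}
    new_pebbles["sigma"] = 0
    return V, E, new_pebbles
```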

Applying Construction 3.8 to an in-star results in a single arc, and when T is a
directed path as in Theorem 3.7, D(T) is simply an out-star. Thus many tree positions
can be reduced to positions on fewer vertices.
It is worth noting, however, that many trees will not reduce to simple positions.

In particular, the transitive triple graph, a K3 oriented without a cycle (Fig. 2.3),
has proven very difficult to analyze. However, we present here the set of 𝒫 -positions.

Theorem 3.10. A position in blocking pebbles on a transitive triple with g1 green peb-
bles on the source vertex, g3 on the sink, and g2 on the remaining vertex is a 𝒫 -position
if and only if g2 = g3 .

Proof. Note that, as in all other green-only positions, pebbles on the source vertex are
superfluous. Since any move that increases the total g2 + g3 can be undone, we can
consider these heaps as nim heaps and play accordingly.

We close this section with a very simple result, but one that may prove useful in
future investigations into the game.

Theorem 3.11. A single green pebble on the sink node of a transitive tournament on n
vertices is equivalent to a nim heap of size n − 1.

Proof. We simply consider all options of this position. Since the pebble can only move
backward and can move to any previous node, a pebble on the kth vertex in the topo-
logical order behaves as a nim heap of size k − 1; for the sink, k = n, so the position is
equivalent to removing any number of stones from a nim heap of size n − 1.

4 Blue-red-green blocking pebbles is PSPACE-hard


We next show that it is computationally intractable to determine the outcome class of
a general Blue-Red Blocking Pebbles position. More specifically, via a reduction
from Positive CNF, we show that Blue-Red Blocking Pebbles is PSPACE-hard.
PSPACE is the class of computational problems that can be solved by a Turing ma-
chine using a polynomial amount of writing space. Problems that are PSPACE-hard
are those whose worst-case instances are at least as difficult to compute as the hard-
est problems in PSPACE, up to a polynomial-time transformation. This hardness can
be proven by finding a reduction from an already-known-to-be PSPACE-hard problem
to the game in question.
PSPACE-hard problems are widely considered to be intractable because no known
algorithm solves them in polynomial time, despite this being a significant open
problem for over 40 years. (Indeed, the Millennium Problem P vs NP is more heavily
studied, and NP is a subset of PSPACE.) By proving that a ruleset is PSPACE-hard we
are showing that it is not trivial to calculate winning strategies. This means that actual
competition is interesting; humans stand a chance against computer players, as no
efficient perfect player exists unless P = PSPACE.
We reduce from Positive CNF to show hardness. Positive CNF is a game played
with two players, True and False, with alternating turns, on a list of Boolean vari-
ables and a CNF (Conjunctive Normal Form) formula using those variables and with
no negated literals. In the starting position, all variables begin unassigned. The
True player, on their turn, sets one unassigned variable to true. On the False player’s
turn, they set one unassigned variable to false. After all variables are set, the True
player wins if the CNF evaluates to true; otherwise, False wins. Positive CNF is
PSPACE-hard1 [16].
To reduce from Positive CNF to Blocking Pebbles, we need to have both a way
for the players to alternate setting variables and a way for the evaluation of the CNF
to determine the winner of the game. We achieve this using three different gadgets.
We have a gadget for the players to set the variables (the Variable Gadget, Fig. 2.6), a
gadget for the False player to select a clause they believe they have falsified (the Clause
Gadget, Fig. 2.7), and a gadget that allows the True player to win if no clause is falsified,
but allows False to win if they do falsify a clause (the Goal Gadget, Fig. 2.8).
We consider the following properties of the formula in the Positive CNF position:
– The formula uses n variables and has m clauses.
– The ith clause contains Li unique literals.

The first gadget to describe is the variable gadget. See Fig. 2.6 for an example. The first
pebble moved onto the vertex labeled xi corresponds to that player choosing the vari-
able xi in Positive CNF. If Red (True) moves there, then all pebbles in the gadget are
unable to move for the remainder of the game. However, if Blue (False) moves there,
then they can later move the pebble down with the other pebble, then down one of
the paths to a single clause vertex (this will also give two moves for Red).
These clause paths are long to prevent players from traversing back “upward”
later with a cache of tokens on the clauses.
Each clause gadget (see Fig. 2.7) includes a vertex Ci connected to the Li unique
literals in that CNF clause via these paths as shown in our variable gadget, and g is
part of the goal gadget (Fig. 2.8). We want to ensure that Blue can get exactly one
pebble to g if and only if Blue moves in each of the variables in clause j.
We enforce this by requiring Blue to accumulate exactly a power of two (2^k) peb-
bles on the clause vertex Cj , so they can push those pebbles down a path of length k
to reach g. If there are Lj literals in clause j, then the path needs to require ⌈log2 (Lj )⌉
moves to reach g, meaning that there need to be ⌈log2 (Lj )⌉ − 1 vertices on the path
between Cj and g. In the many cases where Lj is not a power of 2, we have to start
with extra blue pebbles on the clause vertex to “round it up”; this is just f (Lj ), where
f (n) = 2^⌈log2 (n)⌉ − n.
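The rounding term is easy to compute; a small Python helper mirroring the definition
(ours, for illustration):

```python
from math import ceil, log2

def f(n: int) -> int:
    """Extra blue pebbles rounding a clause of n >= 1 literals up to a power of 2."""
    return 2 ** ceil(log2(n)) - n   # equivalently (1 << (n - 1).bit_length()) - n

assert [f(n) for n in (1, 2, 3, 5, 8)] == [0, 0, 1, 3, 0]
```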
We will give Red (True) a single red pebble on our goal gadget that can traverse
a path of length n ⋅ (5 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) − n. If Blue cannot reach g, then
(as we will prove later) Red will win with this path. If Blue can reach g, however, then
they can follow Red down that long path.

1 At the original time of submission, Positive CNF was known to be hard for 11-CNF formulas [16].
Since then, an impressive improvement was discovered, showing that it is hard even on 6-CNF formu-
las [15].
Figure 2.6: Example of a variable gadget for variable xi . Here xi is included in clauses Ca , Cb ,
Cc , Cd , and Ce , a subset of all the clauses; Li is the number of unique literals in clause Ci , so
⌈log2 (maxi {Li })⌉ is enough that even if a clause gets all the blue pebbles from the variables, it will
not be enough to go back deep into the variable gadget.

Figure 2.7: Clause Gadget for a single clause Cj with Lj unique literals. f (Lj ) is the number needed to
push Lj to the next power of two, so f (Lj ) = 2^⌈log2 (Lj )⌉ − Lj .
Figure 2.8: Goal Gadget: a path of length n ⋅ (5 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) − n ending at g.
This gives Red more moves than Blue if Blue cannot reach vertex g. If Blue is able to reach g, then
they can follow Red down the path and win.

If Blue can reach g from one of the clause vertices, they can use the goal gadget to
follow Red’s single pebble down the path. Red will be forced to traverse down this
path before Blue arrives, since Red’s only other pebbles are in the variable gadget,
where they can move in once or twice, depending on if Blue activates the variable or
not. It will take Blue at least two moves for each variable they activate to reach the goal
node, so Red’s pebble on the goal gadget will never get in the way of Blue’s, so long
as Blue activates at least as many variables as Red.

Lemma 4.1. Suppose that Blue moves on k variable gadgets. Then, no matter which
player wins the game, the number of moves Red can make is at least n ⋅ (5 +
⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) − n and no more than n ⋅ (5 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) + k.

Proof. For the first part, clearly, Red can always make n ⋅ (5 + ⌈log2 (maxi {Li })⌉) +
2 ∑i f (Li ) − n moves by playing on the goal gadget, no matter what else may be hap-
pening in the game.
For the upper bound, Red can only move once on variable gadgets they assigned
and twice on variable gadgets that Blue assigned. This will give them up to 2k + n − k =
n + k moves total from variable gadgets.
In total, Red has

    Red moves = n + k                                              (variable gadgets)
              + n ⋅ (5 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) − n   (goal gadget)
              = n ⋅ (5 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) + k.

Theorem 4.2 (Hardness of Blocking Pebbles). Blocking Pebbles is PSPACE-hard.

Proof. We complete the proof by showing that the transformation described results in
a proper reduction. In other words, that Red/True wins going first in the Positive CNF
instance if and only if Red/True wins going first in the resulting Blocking Pebbles
position. For notational simplicity, we will let X = ⌈log2 (maxi {Li })⌉. We will refer to
variable assignment in our reduction: this corresponds to a player moving a pebble of
their color onto the corresponding xi .

[⇒] Assume that Red/True wins moving first in the Positive CNF game. Then Red
has a strategy to prevent Blue from falsifying all variables in any one clause. Red may
follow the strategy of the corresponding game of Positive CNF. Whenever Blue does
not make a variable assignment when there are still unassigned variable gadgets re-
maining, Red may assign a variable arbitrarily. Similarly, when Red’s response in the
Positive CNF game has already been made, they may also play arbitrarily. Thus, once
all variables are assigned, for every true assignment in the corresponding Positive
CNF, there is a red pebble moved on the corresponding variable gadget (and perhaps
on other variable gadgets as well). Then, by construction, Blue cannot accumulate
enough Blue pebbles on any Ci vertex of a clause gadget to reach the goal gadget.
On the clause gadgets, without moving any pebble to the goal gadget, Blue may
make several moves from a pile of pebbles by repeatedly dropping one pebble to move
another pebble down an arc, then moving that pebble back to the pile. This strategy
produces 2(n − 1) moves from a pile of n ≥ 1 pebbles. Note that we cannot get any
more moves than this from a clause pile since no pebble can be moved to a vertex that
offers a sufficiently long sequence of in-neighbor moves. So we obtain an upper bound on
the number of moves for Blue on the clause gadgets by assuming that all their blue
pebbles are on a single clause vertex:

    2(n + ∑i f (Li ) − 1).

In total:

    Blue moves ≤ n                                       (claiming variables)
               + n ⋅ (X + 2)                             (moving claimed variables to clauses)
               + 2(n + ∑i f (Li ) − 1)                   (back-and-forth moves in clauses)
               = n ⋅ (X + 5) + 2 ∑i f (Li ) − 2
               < n ⋅ (X + 5) + 2 ∑i f (Li )
               ≤ Red moves.                              (by Lemma 4.1)

Thus Red will win.


[⇐] We will prove this by contrapositive. Assume that Red/True does not win in
the Positive CNF position. Red may make moves corresponding to assigning variables
in the Positive CNF to be True, or they may deviate from that. If Red makes a move
corresponding to a True assignment, then Blue will respond with the appropriate win-
ning False assignment. If Red makes a move that does not correspond to a True assign-
ment, then Blue can arbitrarily pick a remaining variable, pretend that Red assigned
that to True, and respond with the appropriate winning False assignment. If Red ever
makes True one of the variables Blue has already pretended they claimed, then Blue
will choose yet another remaining variable, pretend Red makes that True, and again
choose the appropriate winning response.
Since Blue has a winning strategy in the Positive CNF position, this will result in
all the variable gadgets for at least one clause being claimed by Blue pebbles.
Thus, in the Blocking Pebbles board resulting from our transformation, Blue has
a strategy to get enough Blue pebbles onto at least one Ci vertex to have a pebble reach
the goal vertex.
Now let us count the number of moves they will have:

    Blue moves ≥ ⌊n/2⌋                                              (claiming variables)
               + ⌊n/2⌋ ⋅ (X + 2)                                    (moving claimed variables to clauses)
               + ⌈log2 (mini {Li })⌉                                (move from claimed Ci to g)
               + n ⋅ (5 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) − n   (moves on the goal gadget)
               ≥ n ⋅ (6 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) + ⌊n/2⌋ + ⌊n/2⌋ ⋅ X − n
               > n ⋅ (6 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) + n/2 − n
               = n ⋅ (5 + ⌈log2 (maxi {Li })⌉) + 2 ∑i f (Li ) + n/2
               ≥ Red moves.                                         (by Lemma 4.1)

Thus Blue will have more moves than Red and will win in the Blocking Pebbles
position.

Theorem 4.2 gives us the hardness for the general game on graphs, but, as with
many results like this, it is likely not the final word on the matter for two reasons. First,
it is not clear at this point what graph structure(s) are natural for actual play. So the
range of this reduction may not line up with real-world competition.
Second, it is simultaneously possible that the game cannot be solved in poly-
nomial space and may be hard for a more difficult complexity class (a superset of
PSPACE, e. g., EXPTIME). One reason for this is that the sizes of the pebble-piles could
be exponential in the size of the position description. Additionally, since the graph
could contain cycles, the game is loopy, which can often lead to EXPTIME-hardness.
Improvements to this result could include:
– Algorithms showing that a player can avoid our constructions from given starting
positions (while maintaining the winnability for a player);
– Reductions to more structured graphs than in the range of our reduction; or
– Reductions from supersets of PSPACE.

5 Further directions
There remain many open questions and avenues for further study of blocking peb-
bles. In particular, we would like to resolve the question of game values for all-green
games. As we have mentioned, it has proven difficult to determine these values when
the underlying graph contains cycles.
Through the use of computational software, in particular, CGSuite [17], we have
been able to find positions with many dyadic game values. It remains an open question
whether or not there is a dyadic rational a/2^b that is not the value of any position in
blocking pebbles.
With regards to computational complexity, in addition to the potential improve-
ments listed in that section, the computational hardness of Green Blocking Pebbles
remains an open problem.

Bibliography
[1] M. Albert, R. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, CRC Press, 2007.
[2] M. Albert, J. Grossman, R. Nowakowski, and D. Wolfe, An introduction to clobber, Integers 5(2)
(2005).
[3] E. Berlekamp, J. Conway, and R. Guy, Winning Ways for Your Mathematical Plays, volumes 1–4,
A K Peters, Natick, MA, 2003.
[4] N. Calkin, K. James, J. Janoski, S. Leggett, B. Richards, N. Sitaraman, and S. Thomas, Computing
strategies for graphical Nim, Congr. Numer. 202 (2010), 171–185.
[5] F. Chung, Pebbling in hypercubes, SIAM J. Discrete Math. 2(4) (1989), 467–472.
[6] C. dos Santos and J. Silva, Konane has infinite nim-dimension, Integers 8(1) (2008), #G2, 6 pp.
[7] S. Faridi, S. Huntemann, and R. Nowakowski, Games and complexes I: Transformation via
ideals, in Games of No Chance 5, MSRI Publications, vol. 70, p. 285 (2019).
[8] M. Fukuyama, A nim game played on graphs, Theoret. Comput. Sci. 304(1–3) (2003), 387–399.
[9] G. Hurlbert, A survey of graph pebbling, Congr. Numer. 139 (1999), 41–64.
[10] L. Kirousis and C. Papadimitriou, Searching and pebbling, Theoret. Comput. Sci. 47 (1986),
205–218.
[11] Q. Liu, Red-Blue and Standard Pebble Games: Complexity and Applications in the Sequential
and Parallel Models, Master’s thesis, MIT, Cambridge, MA, 2017.
[12] R. Milley and G. Renault, Dead ends in misère play: the misère monoid of canonical numbers,
Discrete Math. 313(20) (2013), 2223–2231.
[13] L. Pachter, H. Snevily, and B. Voxman, On pebbling graphs, Congr. Numer. 107 (1995), 65–80.
[14] M. Prudente, Two-Player Graph Pebbling, PhD thesis, Lehigh University, Bethlehem, PA, 2015.
[15] M. Rahman and T. Watson, 6-uniform maker-breaker game is PSPACE-complete, in 38th
International Symposium on Theoretical Aspects of Computer Science (STACS 2021), Schloss
Dagstuhl-Leibniz-Zentrum für Informatik (2021).
[16] T. Schaefer, On the complexity of some two-person perfect-information games, J. Comput.
System Sci. 16(2) (1978), 185–225.
[17] A. Siegel, Combinatorial Game Suite: An open-source program to aid research in combinatorial
game theory, http://cgsuite.sourceforge.net (2004).
Kyle Burke, Matthew Ferland, and Shang-Hua Teng
Transverse Wave: an impartial
color-propagation game inspired by social
influence and Quantum Nim
Abstract: In this paper, we study Transverse Wave, a colorful, impartial combinato-
rial game played on a two-dimensional grid. We are drawn to this game because of its
apparent simplicity, contrasting intractability, and intrinsic connection to two other
combinatorial games, one about social influences and another inspired by quantum
superpositions. More precisely, we show that Transverse Wave is at the intersection
of two other games, the social-influence-derived Friend Circle and superposition-
based Demi-Quantum Nim. Transverse Wave is also connected with Schaefer’s logic
game Avoid True from the 1970s. In addition to analyzing the mathematical structures
and computational complexity of Transverse Wave, we provide a web-based version
of the game. Furthermore, we formulate a basic network-influence game, called De-
mographic Influence, which simultaneously generalizes Node-Kayles and Demi-
Quantum Nim (which in turn contains Nim, Avoid True, and Transverse Wave as
particular cases). These connections illuminate a lattice order of games, induced by
special-case/generalization relationships, fundamental to both design and compara-
tive analysis of combinatorial games.

1 The game dynamics of Transverse Wave


Elwyn Berlekamp, John Conway, and Richard Guy were known not only for their deep
mathematical discoveries, but also for their elegant minds. Their love of mathematics
and their life-long efforts of making mathematics fun and approachable led to their
masterpiece, Winning Ways for your Mathematical Plays [3], a book that has inspired
many. The field that they pioneered—combinatorial game theory—reflects their per-
sonalities. Popular combinatorial games are usually:
– Approachable: they have easy to remember and understand rules, and
– Elegant: they have attractive game boards, yet
– Deep: they are challenging to play optimally and have intriguing strategies.

Acknowledgement: We thank the anonymous reviewers for helpful suggestions. This research was
supported in part by an NSF grant (CCF-1815254) and by a Simons Investigator Award.

Kyle Burke, (c/o Christian Roberson), Department of Computer Science, Plymouth State University,
Plymouth, New Hampshire, USA, URL: https://turing.plymouth.edu/~kgb1013/
Matthew Ferland, Shang-Hua Teng, Department of Computer Science, University of Southern
California, Los Angeles, California, USA, e-mail: mferland@usc.edu; URL:
https://viterbi-web.usc.edu/~shanghua/

https://doi.org/10.1515/9783110755411-003

1.1 Combinatorial games and computational complexity


The last property has a characterization based on computational complexity. To make
the competition interesting, we want the winnability to be computationally intractable,
meaning that there is no known efficient algorithm to always calculate a position’s
outcome class. One way to argue this is showing that the problem of finding the out-
come class is hard for a common complexity class. Many such combinatorial games
are found to be PSPACE-hard, meaning that finding a polynomial-time algorithm au-
tomatically leads to a polynomial-time solution to all problems in PSPACE.1
On the other hand, for games where the winner can be determined algorithmically
in polynomial time, optimal players can be programmed that will run efficiently. In a
match with one of these players, there is no need to play the game out to determine
whether a winning move exists; just run the program and use that output.
In popular games from Go [21] to Hex [25, 23], determining the winnability be-
comes computationally intractable as the dimensions of the board grow. These sorts
of elegant combinatorial games with easy to follow rulesets and intractable complex-
ity are the gold standard for combinatorial game design [11, 5].
This argument does not entirely settle the debate about whether a ruleset is “in-
teresting”. Indeed, it could be the case that from common starting positions, there is
a strategy for the winning player to avoid the computationally hard positions. Find-
ing computational hardness for general positions in a ruleset is only a minimum re-
quirement. Improvements can be made by finding hard positions that are “naturally”
reachable from the start. The best proofs of hardness yield positions that are starting
positions themselves.2

1.2 A colorful propagation game over 2D grids


In this paper, we consider a simple, colorful, impartial combinatorial game over two-
dimensional grids.3 We call this game Transverse Wave. We became interested in
this game during our study of quantum combinatorial games [5], particularly in the
complexity-theoretical analysis of a family of games formulated by Nim superposi-
tions. In addition to Schaefer’s logic game, Avoid True, Transverse Wave is also fun-
damentally connected with several social-influence-inspired combinatorial games. As
we will show, Transverse Wave is PSPACE-hard on some possible starting positions.

1 Note that NP is contained in PSPACE.


2 This requires some variance on these starting positions. “Empty” or well-structured initial boards
do not have a large enough descriptive size to be computationally hard in the expected measures.
3 We tested the approachability of this game by explaining its ruleset to a bilingual eight-year-old
second-grade student—in Chinese—and she turned to her historian mother and flawlessly explained
the ruleset in English.

Ruleset 1 (Transverse Wave). For a pair of integer parameters m, n > 0, a game po-
sition of Transverse Wave is an m by n grid G, in which the cells are colored either
green or purple.4
For the game instance starting at this position, two players take turns selecting a
column of this colorful grid. A column j ∈ [n] is feasible for G if it contains at least one
green cell. The selection of j transforms G into another colorful m by n grid G ⊗ [j] by
recoloring purple both column j and every row that has a purple cell in column j.5 In the
normal-play convention the player without a feasible move loses the game.

See Figure 3.1 for an example of a Transverse Wave move.

Figure 3.1: An example move for Transverse Wave. a: A position from which the first player chooses
column 2. (This is a legal choice, because column 2 has a green cell.) b: Indigo cells denote those
that will become purple. These include the previously green cells in column 2 as well as the green
cells in rows where column 2 had purple cells. c: The new position after all cells are changed to be
purple.
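The move operation itself is only a few lines of code. Below is an illustrative Python
sketch (our own encoding: a list of lists with True for green and False for purple) of
the recoloring rule of Ruleset 1:

```python
def feasible(grid, j):
    """Column j is feasible if it contains at least one green cell."""
    return any(row[j] for row in grid)

def select(grid, j):
    """Apply the move G ⊗ [j]: the green cells of column j turn purple,
    and every row that is purple at column j turns entirely purple."""
    assert feasible(grid, j)
    new = []
    for row in grid:
        if row[j]:                       # green at (i, j): only this cell flips
            r = list(row)
            r[j] = False
            new.append(r)
        else:                            # purple at (i, j): the wave wipes row i
            new.append([False] * len(row))
    return new

# A position is terminal (previous player wins under normal play)
# exactly when no column is feasible, i.e., the grid is all purple.
```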

Note that purple cells cannot change to green, and each move changes all green cells
in one column to purple (and possibly some in other columns as well). Thus play from
any position of dimension m by n must end within n turns, and the height of a Trans-
verse Wave game tree is at most n. Consequently, Transverse Wave is solvable in
polynomial space.

4 Or any pair of easily distinguishable colors.


5 Think of purple paint cascading down column j and inducing a purple “transverse wave” whenever
the propagation goes through an already-purple cell.

Note also that the selection of column j could make some other feasible columns
infeasible.6 In Section 4, we will show that the interaction among columns intro-
duces sufficiently rich mathematical structures for Transverse Wave to efficiently
encode any PSPACE-complete game such as Hex [17, 22, 12, 23], Avoid True [25],
Node Kayles [25], Go [21], or Geography [21]. In other words, Transverse Wave is a
PSPACE-complete impartial game.
We have implemented Transverse Wave in HTML/JavaScript.7

1.3 Dual logical interpretations of transverse wave


Transverse Wave uses only two colors; a position can be expressed naturally with a
Boolean matrix. Furthermore, making a move can be neatly captured by basic Boolean
functions. Let us consider the following two combinatorial games over Boolean ma-
trices that are isomorphic to Transverse Wave. Although these logic associations are
straightforward, they set up stimulating connections to combinatorial games inspired
by social influence and quantum superpositions.
We use the following standard notation for matrices. For an m×n matrix A, i ∈ [m]
and j ∈ [n], let A[i, :], A[:, j], A[i, j] denote, respectively, the ith row, jth column, and
the (i, j)th entry in A.

Ruleset 2 (Crosswise AND). For integers m, n > 0, Crosswise AND plays on an m × n
Boolean matrix B.
During the game, two players alternately select j ∈ [n], where j is feasible for B if
B[:, j] ≠ 0⃗. The move with selection j then changes the Boolean matrix as follows:
for each i ∈ [m], the ith row takes a componentwise AND with its jth bit, B[i, j], and then
the (i, j)th entry is set to 0. Under normal play, the player with no feasible column to
choose loses the game.

By mapping purple cells to Boolean 0 (i. e., false) and green cells to Boolean 1
(i. e., true) we see the following proposition.

Proposition 1. Transverse Wave and Crosswise AND are isomorphic games.
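
To make the Boolean description concrete, the following is a minimal sketch (ours) of the Crosswise AND move on a matrix stored as a list of 0/1 lists; with green mapped to 1 and purple to 0, it performs exactly the Transverse Wave move:

    def crosswise_and_move(B, j):
        # Each row takes a componentwise AND with its own jth bit;
        # afterwards the (i, j) entry is set to 0.
        result = []
        for row in B:
            bit = row[j]
            new_row = [b & bit for b in row]
            new_row[j] = 0
            result.append(new_row)
        return result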

Ruleset 3 (Crosswise OR). For integer parameters m, n > 0, Crosswise OR plays on
an m × n Boolean matrix B.
During the game, two players alternately select j ∈ [n], where j is feasible for B if
B[:, j] ≠ 1⃗. The move with selection j then changes the Boolean matrix as follows:
for each i ∈ [m], the ith row takes a componentwise OR with its jth bit, B[i, j], and then
the (i, j)th entry is set to 1. Under normal play, the player with no feasible column to
choose loses the game.

6 This perpendicular propagation gives rise to the name “Transverse Wave”.

7 Web version: https://turing.plymouth.edu/~kgb1013/DB/combGames/transverseWave.html

By mapping purple cells to Boolean 1 and green cells to Boolean 0 we see another
isomorphism.

Proposition 2. Transverse Wave and Crosswise OR are isomorphic games.

1.4 Transverse Wave game values


As we will discuss later, Transverse Wave is PSPACE-complete. So we have no hope
for an efficient complete characterization for the game values. In this subsection, we
show that we can fully characterize the Grundy values (i. e., nimbers) for the specific
case of Transverse Wave, where each column has either 0 or 1 purple tiles. Because
we are able to classify the moves into two types of options and by extension can de-
fine the game by two parameters, a fun and interesting Pascal-like triangle of nimbers
arises.
Before stating the theorem, we will define some terms. These are named based on
starting positions, but also relate to any position resulting during the play from one
of these positions. We call rows/columns that contain only purple tiles all-purple. We
will call the rows with an odd number of purple tiles the odd parity rows (not count-
ing tiles in all-purple columns) and the ones with an even number of purple tiles the
even parity rows (again, not counting tiles in all-purple columns). We will also refer to
columns that contain only green tiles (except for all-purple rows) as all-green. These
terms make more sense in the case where there are no all-purple columns or all-purple
rows, but we note that we can remove those rows and columns, resulting in an isomor-
phic game position.

Theorem 1 (Pascal-Like Nimber Triangle). Let p be the number of rows that are not all-
purple, let k be the number of rows with odd parity, and let q = 0 if there are an even
number of columns with only green tiles and 1 otherwise. We define G′ to be

G′ = 0    if (k is even and p > 2k) or (k is odd and p < 2k),
G′ = ∗    if (k is even and p < 2k) or (k is odd and p > 2k),
G′ = ∗2   if p = 2k.

If G is a Transverse Wave position in which every selectable column contains at most
one purple tile (discounting rows with only purple tiles), then G = G′ + ∗q.

We include example applications of Theorem 1 in Figure 3.2.
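
The statement of Theorem 1 translates directly into code; the sketch below (ours) returns the Grundy value as an integer nimber, using the fact that nimber addition is bitwise XOR:

    def theorem1_value(p, k, q):
        # p: rows that are not all-purple; k: odd-parity rows;
        # q: parity of the number of all-green columns.
        if p == 2 * k:
            g = 2                          # G' = *2
        elif (k % 2 == 0) == (p > 2 * k):
            g = 0                          # (k even, p > 2k) or (k odd, p < 2k)
        else:
            g = 1                          # (k even, p < 2k) or (k odd, p > 2k)
        return g ^ q                       # adding *q is a nim-sum (XOR) with q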

Proof. Note that all-green columns cannot be fully colored purple by a move from an-
other column, as that other column would either have to be all-purple (so it cannot


Figure 3.2: Two Transverse Wave positions where we can apply Theorem 1. In (a), p = 3, k = 3, and
q = 1, so G = G′ + ∗q = ∗ + ∗ = 0. In (b), there is one all-purple row and one all-purple column, which
we ignore/remove. Then p = 2, k = 1, and q = 0, so G = G′ + ∗q = ∗2 + 0 = ∗2.

be chosen) or it would have to have green cells in a row where the all-green column
is purple (then the all-green column is not all-green). This means that each all-green
column contributes an additive ∗ to the game; it can be replaced with a single inde-
pendent move.
We claim that G′ is the game value without including the all-green columns, and G
is the game value with them.
In the beginning, each column has at most one purple cell, so the difference
between the numbers of purple cells in any two columns is at most one. After each play,
either only the cells in the chosen column become purple, or one entire row additionally
turns purple.
Assuming that G′ is as we claim, it is not difficult to see that G is correct. As previ-
ously noted, each all-green column is just a ∗, so G = G′ if there are an even number
of all-green columns, and G = G′ + ∗ if there are an odd number of them.
We have an illustrative triangle of cases of G′ of up to eight rows in Figure 3.3.
If the player chooses a column whose purple tile is in an odd parity row, then an
even number (2t) of other (playable) columns also have their single purple cell in that
row; all of those columns become all-green. As mentioned above, each of these columns
additively contributes ∗ to the value, for a total of 2t × ∗ = 0. Thus the resulting option's
value is just the same as one with p − 1 rows and k − 1 rows with odd parity. This is just
the value above and left in the triangle previously referenced.
If the player instead chooses a column whose purple tile is in a row with even
parity, then an odd number (2t + 1) of other columns also have their single purple cell
in that row. Thus the result is (2t + 1) × ∗ = ∗ added to the option. Thus the resulting
game is the value above and right in the triangle (the same number of odd rows and
one less row overall) plus ∗.

    p\k   0    1    2    3    4    5    6    7    8
     1    0    0
     2    0    ∗2   ∗
     3    0    ∗    ∗    0
     4    0    ∗    ∗2   0    ∗
     5    0    ∗    0    0    ∗    0
     6    0    ∗    0    ∗2   ∗    0    ∗
     7    0    ∗    0    ∗    ∗    0    ∗    0
     8    0    ∗    0    ∗    ∗2   0    ∗    0    ∗

Figure 3.3: A Pascal-like nimber triangle of the values of Transverse Wave positions where each
column has no more than 1 purple tile. The levels p of the triangle are the number of rows in the
game board (with at least one purple tile), and the diagonals k indicate the number of odd parity
rows. Each entry can be determined from the two above it by taking the mex of the above-left entry
and of the above-right entry plus ∗.
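
The recurrence in the caption is easy to mechanize. The sketch below (ours) regenerates the triangle from the moveless base row p = 1, following the parent-availability pattern used in the proof: the above-left option exists when k ≥ 1, and the above-right option (plus ∗, a nim-sum with 1) exists when k ≤ p − 1:

    def mex(values):
        g = 0
        while g in values:
            g += 1
        return g

    def nimber_triangle(max_rows):
        T = {(1, 0): 0, (1, 1): 0}                  # base row: no feasible moves
        for p in range(2, max_rows + 1):
            for k in range(p + 1):
                options = set()
                if k >= 1:
                    options.add(T[(p - 1, k - 1)])  # odd-parity-row move
                if k <= p - 1:
                    options.add(T[(p - 1, k)] ^ 1)  # even-parity-row move, plus *
                T[(p, k)] = mex(options)
        return T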
By inspection, note that Table 3.1 gives the correct value for each possible pair
of parents in the game tree.

Table 3.1: Value of a position given what is above and left in the triangle and what is above and
right. The position's options are the above-left entry and the above-right entry plus ∗; "(none)"
marks a parent that does not exist.

Case   Above left   Above right   Value
a      (none)       (none)        0
b      (none)       0             0
c      0            (none)        ∗
d      0            0             ∗2
e      0            ∗             ∗
f      0            ∗2            ∗
g      ∗            (none)        0
h      ∗            0             0
i      ∗            ∗             ∗2
j      ∗            ∗2            0
k      ∗2           0             0
l      ∗2           ∗             ∗

Now we have five cases, which invoke those table cases.
1. k even and p > 2k: case a, b, h, or j (thus the value is 0)
2. k odd and p < 2k: case a, g, h, or k (the value is 0)
3. k even and p < 2k: case c, e, or l (the value is ∗)
4. k odd and p > 2k: case e or f (the value is ∗)
5. p = 2k: case d or i (the value is ∗2)

We prove the correctness of these cases by induction on the number of rows. Our base
case, a single row, holds by inspection.
For the inductive case, we assume that the claim holds for all positions with p − 1 rows.
We first show that it holds when k is even and p > 2k (case 1 in our list). This is
either the base case (case a), or there are 0 odd rows (case b), or k is some other even
number less than p/2. In this last case the left parent will have k − 1 with p − 1 > 2(k − 1),
which, by induction, must be ∗. Then the right parent has p − 1 ≥ 2k and thus is either
0 or ∗2. Thus it must be either case h or j.
Now we examine case 2 in our list, where k is odd and p < 2k. This is either the
base case (case a), or k = p (which must be case g, since it only has a single left parent
with even k), or it is some other odd k such that p < 2k. Then it has a right parent with
odd k and p < 2k, which is inductively 0. The left parents have even k and p−1 ≤ 2(k−1),
thus a left parent is either ∗ or ∗2, which are cases h and k, respectively.
For case 3 in our list, if k is even and p < 2k, then either k = p, in which case it has
a single left parent in case 2, which is case c, or it is some other even k with p < 2k. In
that case the right parent will have the same k, be in case 3, and have value ∗. The left
parent will have k − 1 and p − 1 ≤ 2(k − 1), and thus be 0 (case e) or ∗2 (case l).
For case 4, if k is odd and p > 2k, then we know that the left parent has
p − 1 > 2(k − 1), which is 0 (since it is case 1). The right parent has p − 1 ≥ 2k and is
thus ∗ (case e) or ∗2 (case f).
Finally, in case 5, if p = 2k, then k can be either odd or even. If it is odd, then the
left parent has even k and p − 1 > 2(k − 1) and is thus 0, and the right parent has odd k
and p − 1 < 2k and is thus 0, putting this in case d. If k is even, then the left parent is odd
and thus ∗, and the right parent is even and thus ∗, putting us in case i.

For more general Transverse Wave positions, we provide examples with values
up to ∗7, as shown in Table 3.2.
These results can be extended to the related rulesets that are exact embeddings
of this, e. g., Avoid True and Demi-Quantum Boolean Nim. We discuss these trans-
formations in more detail later.
Although we can only characterize Transverse Wave in very special cases, the
Pascal-like formation of these game values provides us a glimpse of a potential elegant
structure. Something interesting to explore in the future is whether we can cleanly
characterize the game values when the game is restricted to have only two purple tiles
in each column. If so, we would also like to see how large this parameter can grow before
we encounter intractability. Answering these questions can also tell us about the values
of the other related games.

Table 3.2: Instances of values up to ∗7. The shorthand uses parentheses to indicate rows and num-
bers to indicate purple columns. So, for example, in the ∗3 case the first row has columns 0 and 1
colored purple, the second row has column 2 colored purple, and column 3 has no purple cells.

Nimber Rows (shorthand) Other Columns

0 (0)
∗ (0) (01)
∗2 (01) (2)
∗3 (01) (2) 3
∗4 (01) (234) (035)
∗5 (01) (234) (035) 6
∗6 (012) (034) (0156) (2578)
∗7 (012) (034) (0156) (2578) 9

2 Connection to a social-influence-inspired
combinatorial game
Because mathematical principles are ubiquitous, combinatorial game theory is a field
intersecting many disciplines. Combinatorial games have drawn wide inspiration
from logic [25] to topology [22], from military combat [13, 21] to social sciences [8], and
from graph theory [25, 2] to game theory [7]. Because the field cherishes challenging
games with simple rulesets and elegant game boards, combinatorial game design
is also a distillation process, aiming to derive elementary moves and transitions to
capture the essence of complex phenomena that inspire the designers.
In this and the next sections, we discuss two other rulesets whose intersection
contains Transverse Wave. Although they have found a common ground, these
games were inspired by separate research fields. The first game, called Friend Circle,
is motivated by viral marketing [24, 20, 9], whereas the second, called Demi-Quantum
Nim, was inspired by quantum superpositions [18, 10, 5]. In this section, we first focus
on Friend Circle.

2.1 Viral-marketing broadcasting: Friend Circle


In many ways, viral marketing itself is a game of financial optimization. It aims to
convert more people through network-propagation-based social influence by strate-
gically investing in a seed group of people [24, 20]. In the following combinatorial
game inspired by viral marketing, Friend Circle, we use a high-level perspective of
social influence and social networks. Consider a social-network universe like Face-
book, where people have their “circles” of friends. They can broadcast to all people in
their friend circles (with a single post), or they can individually interact with some of
their friends (via various personalized means). We will use individual interaction to set


Figure 3.4: Example of a Friend Circle move. In the position on the left, let the seed set S =
{v1 , v2 , v3 , v4 }, all of which are acceptable to choose because they all have an incident false edge.
If a player chooses v2 , then the result is the right-hand position. In the second position, v2 has had
all of its incident edges become true. In addition, since (v1 , v2 ) was true, all incident edges of v1 have
also changed to true. The altered edges in the figure are represented in bold. Note that in the result-
ing position, the next player can only choose to play at either v3 or v4 , as v1 and v2 have only true
edges and v5 ∉ S.

up the game position. In Friend Circle, only the broadcast-type of interaction is ex-
ploited. We will use the following traditional graph-theory notation: In an undirected
graph G = (V, E), for each v ∈ V, the neighborhood of v in G is NG (v) = {u | (u, v) ∈ E}.

Ruleset 4 (Friend Circle). For a ground set V = [n] (of n people), a Friend Circle
position is defined by a triple (G, S, w) where the properties are as follows.
– G = (V, E) is an undirected graph. An edge between two vertices represents a
friendship between those people.
– S ⊂ V denotes the seed set.
– w : E → {f, t} (false and true) represents whether those friends have already spo-
ken about the target product (with at least one recommending it to the other).

To choose their move, a player (a viral marketing agent) picks a person v ∈ S from the
seed set such that there exists e = (v, x) ∈ E with w(e) = f. This represents choosing
someone who has not spoken about the product to at least one of their friends.
The result of the move choosing v is a new position (G, S, w′ ), where w′ is the same
as w except that for all x ∈ NG (v):
– w′ ((v, x)) = t, and
– if w((v, x)) = t, then w′ ((x, y)) = t for all y ∈ NG (x).

An example of a Friend Circle move is shown in Figure 3.4.
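
For concreteness, here is a minimal sketch (ours) of the move, taking the graph as a neighbor map and the labeling w as a dictionary keyed by frozenset edges. Note that the cascade condition reads the labels as they were before the move:

    def friend_circle_move(neighbors, w, v):
        # Playing seed v: every edge at v becomes t (True); for edges at v
        # that were already t, the neighbor broadcasts too.
        w2 = dict(w)
        for x in neighbors[v]:
            if w[frozenset((v, x))]:          # (v, x) was t before the move
                for y in neighbors[x]:
                    w2[frozenset((x, y))] = True
            w2[frozenset((v, x))] = True
        return w2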


By inducing those people in the seed's friend circle who had already interacted
with the chosen seed to broadcast, Friend Circle emulates an elementary two-step
cascading network of social influence.

2.2 Intractability of Friend Circle


We first connect Friend Circle to the classic graph-theory game Node-Kayles.

Ruleset 5 (Node-Kayles). The starting position of Node-Kayles is an undirected
graph G = (V, E).
During the game, two players alternate turns selecting vertices, where a vertex
v ∈ V is feasible if neither it nor any of its neighbors has been selected on a previous
turn. The player who has no more feasible moves loses the game.

When the selected vertices form a maximal independent set of G, the next player
cannot make a move and hence loses the game. It is well known that Node-Kayles is
PSPACE-complete [25].

Theorem 2 (Friend Circle is PSPACE-complete). The problem of determining whether


a Friend Circle position is winnable is PSPACE-complete.

Proof. First, we show that Friend Circle is PSPACE-solvable. During a game of


Friend Circle starting at (G, S, w), once a node s ∈ S is selected by one of the players,
all edges incident to s become t. Since true edges can never later become f, s can never
again be chosen for a move, and the height of the game tree is at most |S|. Then by the
standard depth-first-search (DFS) procedure for evaluating the game tree for (G, S, w)
in Friend Circle, we can determine the outcome class in polynomial space.
To establish that Friend Circle is PSPACE-hard, we reduce Node-Kayles to
Friend Circle.
Suppose we have a Node-Kayles instance at graph G0 = (V0 , E0 ). For the reduced
Friend Circle position, we create a new graph G = (V, E) as follows. First, for each
v ∈ V0 , we introduce a new vertex tv . Let T0 = {tv |v ∈ V0 }, so V = V0 ∪ T0 . In addition,
let E1 = {(v, tv ) | v ∈ V0 }, so E = E0 ∪ E1 . Next, we set the weights:
– w(e) = t for all e ∈ E0 , and
– w(e) = f for all e ∈ E1 ,

as shown in Figure 3.5. Last, we set S = V0 .
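
The construction is mechanical; here is a sketch (ours, representing each pendant vertex tv as a tagged tuple):

    def node_kayles_to_friend_circle(V0, E0):
        # Attach a pendant vertex t_v to each v; old edges are labeled t
        # (True), pendant edges f (False); the seed set is V0.
        pendant = {v: ('t', v) for v in V0}
        E1 = {frozenset((v, pendant[v])) for v in V0}
        E = {frozenset(e) for e in E0} | E1
        w = {e: e not in E1 for e in E}
        V = set(V0) | set(pendant.values())
        S = set(V0)
        return (V, E), S, w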


We now prove that Friend Circle is winnable on (G, S, w) if and only if Node
Kayles is winnable at G0 . Note that because w(v, tv ) = f for all v ∈ V0 , all vertices in V0
are feasible choices for the current player in Friend Circle. As the game progresses,
vertices in V0 are no longer able to be chosen once their edge to the tv vertex becomes
true. From here the argument is simple: each play on vertex v ∈ V0 in Node
Kayles corresponds exactly to the play on v in Friend Circle. In Node Kayles, when
v is chosen, it and all its neighbors NG0 (v) are removed from future consideration.
In Friend Circle, v is also removed because the edge (v, tv ) becomes t. In addition,
since all neighboring vertices x ∈ NG0 (v) share a t-edge with v, their edge with tx will
also become t, but no other vertices will be removed as future choices.


Figure 3.5: Example of the reduction from Node Kayles to Friend Circle. On the left, there is a Node
Kayles vertex and its neighborhood. On the right, there are those same vertices, along with tv , with
t-weights on all the old edges and f on the new edge (v, tv ).

2.3 Transverse Wave in Friend Circle


We now show that Friend Circle contains Transverse Wave as a special case. In
the proposition below and the rest of the paper, we say that two game instances are
isomorphic to each other if there exists a bijection between their moves such that their
game trees are isomorphic under this bijection.

Proposition 3 (Social-Influence Connection of Transverse Wave). For any complete


bipartite graph G = (V1 , V2 , E) over two disjoint ground sets V1 and V2 (i. e., with
E = V1 × V2 ), any weighting w : E → {f, t}, and seeds S = V1 , Friend Circle posi-
tion (G, S, w) is isomorphic to Crosswise OR over a pseudo-adjacency matrix AG for
G with V1 as columns and V2 as rows. In this matrix, we will have the entry at column
x ∈ V1 and row y ∈ V2 be 0 if w((x, y)) = f and 1 if the weight is t.

Note that by varying w : E → {f, t} we can realize any |V1 | × |V2 | Boolean matrix
with AG . Thus Friend Circle generalizes Transverse Wave.

Proof. Imagine these two games are played in tandem. We map the selection of a ver-
tex v ∈ S = V1 to the selection of the column associated with v in the matrix AG of G.
Because G is a complete bipartite graph, v is feasible for Friend Circle if there exists
u ∈ V2 such that w(u, v) = f. Thus the column associated with v in AG is not all 1s.
This is precisely the condition for v to be feasible in Crosswise OR over AG . The Direct
Influence at v in Friend Circle over G changes all edges of v to t and the subsequent
Cascading Influence on initially t neighbors of v in V2 is isomorphic to crosswise ORs.
Thus Friend Circle on G is isomorphic to Crosswise OR over AG .

One can see an example of this isomorphism in Figure 3.6.




Figure 3.6: On the left, there is a Friend Circle position on the complete bipartite graph between V1
and V2 , where the seed set S = V1 . Instead of labeling edges, we have removed all false edges and
include only true edges. On the right, there is the equivalent Transverse Wave position. The purple
cells correspond to the (true) edges in the bipartite graph.

3 Connection to quantum combinatorial game theory


In this section, we discuss the connection of Transverse Wave to a basic quantum-
inspired combinatorial game.
Quantum computing is inspirational not only because the advances of quantum
technologies have the potential to drastically change the landscape of computing and
digital security, but also because the quantum framework—powered by superposi-
tions, entanglements, and collapses—has fascinating mathematical structures and
properties. Not surprisingly, quantumness has already found a way to enrich combina-
torial game theory. In the early 2000s, Allan Goff introduced basic quantum elements
into Tic-Tac-Toe, as a conceptual illustration of quantum physics [18]. The quantum-
generalization of Tic-Tac-Toe expands the strategy space by allowing superpositions
of classical moves, creating game boards with entangled components. Consistency-
based conditions for collapsing can then reduce the degree of possible realizations
in the potential parallel game scenarios. In 2017, Dorbec and Mhalla [10] presented a
general framework, motivated by Goff’s concrete adventures, for a quantum-inspired
extension applicable to all classical combinatorial games. Their framework enabled
our recent work [5] on the structures and complexity of quantum-inspired games,
which also led to the creation of Transverse Wave and Friend Circle in this pa-
per.

3.1 Superposition of moves and game realizations


In this subsection, we briefly discuss quantum combinatorial game theory (QCGT) to
introduce needed concepts and notations for this paper. More detailed discussions of
QCGT can be found in [18, 10, 5].
– Quantum Moves: A quantum move is a superposition of two or more distinct clas-
sical moves. The superposition of w classical moves σ1 , . . . , σw —called a w-wide
quantum move—is denoted by ⟨σ1 | ⋅ ⋅ ⋅ | σw ⟩.
– Quantum Game Position: A quantum position is a superposition of two or more
distinct classical game positions. The superposition of s classical positions
b1 , . . . , bs —called an s-wide quantum superposition—is denoted by 𝔹 = ⟨b1 |
⋅ ⋅ ⋅ | bs ⟩. We call b1 , . . . , bs the realizations of 𝔹. We sometimes refer to classical
moves and positions as 1-wide superpositions.

Classical/quantum moves can be applied to classical/quantum positions. Variants of


the Dorbec–Mhalla framework differ in the conditions under which classical moves are
allowed to engage with quantum positions. In this paper, we focus on the least restric-
tive flavor—referred to by variant D in [10, 5]—in which moves, classical or quantum,
are allowed to interact with game positions, classical or quantum, provided that they
are feasible. There are some subtle differences between these variants, and we direct
interested readers to [10, 5]. In this least restrictive flavor, a superposition of moves
(including 1-wide superpositions) is feasible for a quantum position (including 1-wide
superpositions) if each move is feasible for some realization in the quantum position.
A superposition of moves and a quantum position of realizations create a “tensor” of
classical interactions in which infeasible classical interactions introduce collapses in
realizations.
Quantum moves can have an impact on the outcome class of games, even on clas-
sical positions. In Figure 3.7, we borrow an illustration from [5] showing that quantum
moves can change the outcome class of a basic Nim position. (2, 2) becomes a fuzzy (𝒩 ,
a first-player win) position instead of a zero (𝒫 , a second-player win) position. Quan-
tumness matters for many combinatorial games, as investigated in [5].

Ruleset 6 (Nim). A Nim position is described by a nonnegative integer vector, e. g.,


(3, 5, 7) = G, representing heaps of objects (here pebbles). A turn consists of removing
pebbles from exactly one of those heaps. We describe these moves as a nonpositive
vector with exactly one nonzero element, e. g., (0, −2, 0). Each move cannot remove
more pebbles from a heap than already exist there. Thus the move (−7, 0, 0) is not a
legal move from G, above. When all heaps are zero, there are no legal moves.
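
As a baseline for the quantum generalizations that follow, recall the classical fact that the Sprague–Grundy value of a Nim position is the bitwise XOR (nim-sum) of its heap sizes, and that the winning moves are exactly those restoring a zero nim-sum; in a short sketch (ours):

    from functools import reduce
    from operator import xor

    def nim_value(heaps):
        # Grundy value of a Nim position: the nim-sum of its heap sizes.
        return reduce(xor, heaps, 0)

    def winning_moves(heaps):
        # Pairs (heap index, change) that move to a zero position.
        v = nim_value(heaps)
        return [(i, (h ^ v) - h) for i, h in enumerate(heaps) if (h ^ v) < h]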

To readers familiar with combinatorial game theory, it may seem odd that we ex-
plicitly define the description of moves in the game. However, it is integral for play-
ing quantum combinatorial games, as move description affects quantum collapse. For
more information, see [5].


Figure 3.7: Illustration from [5]: Winning strategy for Next player in Quantum Nim (2, 2), showing
that quantum moves impact the outcome of the game. Only one option from (2, 2) is shown because
it is a winning move. (There are four additional move options from ⟨(1, 2) | (2, 1)⟩ that are not shown
because they are symmetric to moves given.)

Quantum interactions between moves and positions, as demonstrated in [18, 10], can
have a significant impact on Nimber Arithmetic. In addition, as shown in [5], quantum
moves can also fundamentally impact the complexity of combinatorial games.

3.2 Demi-quantum Nim: superposition of Nim positions


The combinatorial game that contains Transverse Wave as a particular case is de-
rived from Nim [4, 16] in a framework motivated by a practical implementation of
quantum combinatorial games [5].
For integer s > 1, an s-wide quantum Nim position of n heaps can be specified by
an s × n integer matrix, where each row defines a single Nim realization. For example,
the 4-wide quantum Nim position with 6 piles,

⟨(5, 3, 0, 4, 2, 2) | (1, 3, 3, 2, 1, 0) | (0, 0, 4, 6, 5, 7) | (4, 2, 5, 0, 1, 2)⟩,



can be expressed in the following matrix form:

    ⎛ 5 3 0 4 2 2 ⎞
    ⎜ 1 3 3 2 1 0 ⎟
    ⎜ 0 0 4 6 5 7 ⎟    (3.1)
    ⎝ 4 2 5 0 1 2 ⎠

Like the quantum generalization of combinatorial games, this demi-quantum gen-


eralization systematically extends any combinatorial game by expanding its game po-
sitions [5]. The intuitive difference here is that players may not introduce new quan-
tum moves; they may only make classical moves, which apply to all (and may collapse
some) of the realizations in the current superposition.

Definition 1 (Demi-Quantum Generalization of Combinatorial Games). For any game


ruleset R, the demi-quantum generalization of R, denoted by Demi-Quantum-R, is a
combinatorial game defined by the interaction of classical moves of R with quantum
positions in R.
Central to the demi-quantum transition is the rule for collapses. Given a quantum
superposition 𝔹 and a classical move σ of R, σ is feasible if it is feasible for at least
one realization in 𝔹, and σ collapses all realizations in 𝔹 for which σ is infeasible,
meanwhile transforming each of the other realizations according to ruleset R.

For example, the move (0, 0, 0, 0, −2, 0) applied to the quantum Nim position in
equation (3.1) collapses realizations 2 and 4, and transforms realizations 1 and 3, ac-
cording to Nim as in the following matrices:

    ⎛ 5 3 0 4 {2 − 2} 2 ⎞
    ⎜ ⊠ ⊠ ⊠ ⊠    ⊠    ⊠ ⎟       ⎛ 5 3 0 4 0 2 ⎞
    ⎜ 0 0 4 6 {5 − 2} 7 ⎟   =   ⎝ 0 0 4 6 3 7 ⎠
    ⎝ ⊠ ⊠ ⊠ ⊠    ⊠    ⊠ ⎠
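
In code (our sketch, with a superposition stored as a list of heap tuples), the collapse rule of Definition 1 specialized to Nim reads:

    def demi_quantum_nim_move(realizations, pile, amount):
        # The classical move must be feasible for at least one realization;
        # realizations where it is infeasible collapse and are dropped.
        survivors = [r[:pile] + (r[pile] - amount,) + r[pile + 1:]
                     for r in realizations if r[pile] >= amount]
        if not survivors:
            raise ValueError("infeasible for every realization")
        return survivors

Applied to the position in equation (3.1) with pile = 4 (zero-based, the fifth pile) and amount = 2, i.e., the move (0, 0, 0, 0, −2, 0), it returns exactly the two surviving realizations shown above.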

Note that for any impartial ruleset R, Demi-Quantum-R remains impartial. We


now show that Demi-Quantum-Nim contains Crosswise AND and thus Transverse
Wave as a particular case.

Proposition 4 (QCGT Connection of Transverse Wave). Let Boolean Nim denote Nim
in which each heap has either one or zero pebbles. Demi-Quantum Boolean Nim is
isomorphic to Crosswise AND and hence isomorphic to Transverse Wave.

Proof. The proof uses the following equivalent “numerical interpretation” of collapses
in the (demi-)quantum generalization of Nim. When a realization collapses, we can ei-
ther remove it from the superposition or replace it with a Nim position in which all piles
have zero pebbles. For example, the following two Nim superpositions are equivalent

for subsequent game dynamics:

    ⎛ 5 3 0 4 0 2 ⎞       ⎛ 5 3 0 4 0 2 ⎞
    ⎜ ⊠ ⊠ ⊠ ⊠ ⊠ ⊠ ⎟       ⎜ 0 0 0 0 0 0 ⎟
    ⎜ 0 0 4 6 3 7 ⎟   ≡   ⎜ 0 0 4 6 3 7 ⎟
    ⎝ ⊠ ⊠ ⊠ ⊠ ⊠ ⊠ ⎠       ⎝ 0 0 0 0 0 0 ⎠

In Boolean Nim, each move can only remove one pebble from a pile. So we
can simplify the specification of the move by the index i alone. Note also that each
quantum Boolean Nim position can be specified by a Boolean matrix. Let B denote
the Boolean matrix of the Demi-Quantum Boolean Nim position under consideration. With the
above numerical interpretation of collapses in the demi-quantum generalization, a realization
of B collapses under move i exactly when the corresponding row of B has a 0 at the ith entry,
in which case the row is replaced by all zeros; this is precisely the effect of the crosswise
AND with the selected column. Thus Demi-Quantum Boolean Nim with position B
is isomorphic to Crosswise AND with position B.

As an aside, notice that positions with all green tiles are trivial, so they are not ap-
propriate starting positions. Interesting games need to be primed with some arbitrary
purple tiles. Thus the hard positions given by the reduction can be natural starting po-
sitions, and the hardness statement is particularly meaningful for Transverse Wave.

4 The graph structures underlying demi-quantum Nim
As the basis of Nimbers and Sprague-Grundy theory [3, 26, 19], Nim holds a unique
place in combinatorial game theory. It is also among the few nontrivial-looking games
with a polynomial-time solution. Over the past few decades, multiple efforts have been
made to introduce graph-theoretical elements into the game of Nim [15, 27, 6]. In 2001,
Fukuyama [15] introduced an edge-version of Graph Nim with Nim piles placed on
edges of undirected graphs. Stockman [27] analyzed several versions with piles on the
nodes. Both use the graph structure to capture the locality of the piles players can
take from. Burke and George [6] then formulated a version called Neighboring Nim,
for which classical Nim corresponds to Neighboring Nim over the complete graph,
where each vertex hosts a pile.
The graph structures have a profound impact on the game of Nim both mathemat-
ically [15, 27] and computationally [6]. By a reduction from Geography, Burke and
George proved that Neighboring Nim on some graphs is PSPACE-hard, whereas on
others (such as complete graphs), it is polynomial-time solvable [6]. However, Neigh-
boring Boolean Nim, where each pile has at most one pebble, is equivalent to Undi-
rected Geography and thus can be solved in polynomial time [14].

In contrast, Demi-Quantum Boolean Nim is intractable.

Theorem 3 (Intractability of Demi-Quantum Boolean Nim). Demi-Quantum Boolean
Nim and hence Transverse Wave (Crosswise AND; Crosswise OR) are PSPACE-
complete games.

4.1 The logic and graph structures of Demi-Quantum Boolean Nim
The intractability follows from the next theorem, which connects Demi-Quantum
Boolean Nim to Schaefer’s elegant PSPACE-complete game Avoid True [25]. The
reduction also reveals the bipartite and hypergraph structures of Demi-Quantum
Boolean Nim.

Ruleset 7 (Avoid True). A game position of Avoid True is defined by a positive CNF F
(an and of a set of or-clauses over only positive variables) over a ground set V and a
subset T ⊂ V, the “true” variables (which is usually the empty set at the beginning of the
game).
A turn consists of selecting one variable from V \ T, where a variable x ∈ V \ T is
feasible for position (F, V, T) if assigning all variables in T ∪ {x} to true does not make
F true. If x is feasible, then the position resulting from that move is (F, V, T ∪ {x}).
Under normal play, the next player loses if the position has no feasible move.
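
Feasibility in Avoid True has a compact computational statement (our sketch, with F given as a list of clauses, each a set of positive variables):

    def avoid_true_feasible(F, T, x):
        # x is feasible for (F, V, T) iff making T ∪ {x} true leaves some
        # or-clause false, i.e., disjoint from the chosen variables.
        chosen = set(T) | {x}
        return any(chosen.isdisjoint(clause) for clause in F)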

Theorem 4 ([5]). Demi-Quantum Boolean Nim and Avoid True are isomorphic games.

Proof. The part of the proof in [5] showing that Quantum Nim is Σ2 -hard also estab-
lishes the above theorem. Because establishing this theorem is not the main focus of
[5], we reformulate the proof here to make this theorem more explicit and to provide
a complete background of our discussion in this section.
We first establish the direction from Demi-Quantum Boolean Nim to Avoid True.
Given a position 𝔹 in Demi-Quantum Boolean Nim, we can create an or-clause from
each realization in 𝔹. Suppose 𝔹 has m realizations and n piles. We introduce n
Boolean variables V = {x1 , . . . , xn }. For each realization in 𝔹, the or-clause consists
of all variables corresponding to piles with zero pebbles. The reduced CNF F𝔹 is the
and of all these or-clauses. Taking a pebble from a pile collapses a realization for
which the pile has no pebble. Such a move is mapped to selecting the corresponding
Boolean variable making the or-clause associated with the realization true. Thus
playing Demi-Quantum Boolean Nim at position 𝔹 is isomorphic to playing Avoid
True starting at position (F𝔹 , V, ∅). Note that the reduction can be set up in polynomial
time.
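
This direction of the reduction is one line per realization (our sketch, with variables represented by zero-based pile indices):

    def nim_superposition_to_cnf(realizations):
        # The or-clause for a realization collects the variables of its
        # empty piles.
        return [frozenset(j for j, pebbles in enumerate(r) if pebbles == 0)
                for r in realizations]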

For example, consider the following Demi-Quantum Boolean Nim position (with
heaps (columns) labeled by their indices and realizations (rows) labeled A, B, and C):

         1 2 3 4 5 6 7
    A  ( 1 0 0 1 1 0 1 )
    B  ( 0 1 0 1 1 1 0 )
    C  ( 1 0 0 1 1 1 0 )

This reduces to the Avoid True position with formula

    (x2 ∨ x3 ∨ x6 ) ∧ (x1 ∨ x3 ∨ x7 ) ∧ (x2 ∨ x3 ∨ x7 )
          A                 B                 C

and T = ∅. The three clauses are labeled by their respective realization. Those variables
that appear in each clause are those with a zero in the matrix. Notice the following
properties.
– The third heap is empty in all Nim realizations, so no player can legally play there.
That is the same in the resulting Avoid True position; no player can pick x3 as it
is in all clauses and would make the formula true.
– The fourth and fifth heaps have a pebble in all three realizations in Nim, so a player
can play in either of them without any collapses. Because of this, those Boolean
variables do not occur in any of the Avoid True clauses.

For the reverse direction, consider an Avoid True position (F, V, T). Assume that V =
{x1 , . . . , xn } and F has m clauses C1 , . . . , Cm . We reduce it to a Boolean Nim superpo-
sition 𝔹(F,V,T) with m realizations and n piles. In the realization for Ci , we set piles
corresponding to variables in Ci to zero to set up the mapping between collapsing the
realization with making the clause true. We also set all piles associated with variables
in T to zero, so that in every realization the moves corresponding to already-selected
variables are infeasible. Again, we can use these two mappings to inductively establish that
the game tree for Demi-Quantum Boolean Nim at 𝔹(F,V,T) is isomorphic to the game
tree for Avoid True at (F, V, T). Note that the reduction also runs in polynomial time.
We demonstrate this reduction on the following Avoid True position with formula

    (x1 ∨ x2 ∨ x3 ∨ x4 ) ∧ (x1 ∨ x5 ∨ x6 ∨ x7 ) ∧ (x1 ∨ x3 ∨ x6 ) ∧ (x2 ∨ x5 ∨ x8 )
             A                     B                   C                 D

and already-chosen variables T = {x8 }. Following the reduction, we produce the fol-
lowing Demi-Quantum Boolean Nim position:

         1 2 3 4 5 6 7 8
    A  ( 0 0 0 0 1 1 1 0 )
    B  ( 0 1 1 1 0 0 0 0 )
    C  ( 0 1 0 1 1 0 1 0 )

Since x8 has already been made true (x8 ∈ T), the following properties hold.
– The eighth column is all zeroes.
– Since x8 appears in clause D, that clause does not have a corresponding realization
in the quantum superposition (i. e., a row in the matrix).

Theorem 4 presents the following bipartite-graph interpretation of Demi-Quan-


tum Boolean Nim.

Ruleset 8 (Power Station). The game is defined by a bipartite graph G = (U, V, E)


(with edges E between vertex sets U and V) and a token on a vertex s ∈ U, where there
is a battery at each vertex in U\{s}, and s has 0 batteries. We say that a vertex (“station”)
u ∈ U is reachable from s if s and u can be connected by a length-two path in G, i. e.,
there exists v ∈ V (a “bridge”) such that (s, v) and (v, u) are both edges of G. During
the game, the players take turns selecting a vertex u ∈ U with a battery reachable from
the current vertex s. The token is then moved to u, and the battery of u is expended
(removed) to “power” all stations in V connected to u. All vertices in V not connected
to u are removed from the graph, because they are not powered. Under the normal-play
convention, a player with no feasible move loses the game.

Theorem 4 also gives the following simple hypergraph interpretation of Demi-


Quantum Boolean Nim. Recall that a hypergraph H over a groundset V = [n] is a
collection of subsets of V. We write H = (V, E), where for each hyperedge e ∈ E, e ⊆ V.

Ruleset 9 (Hydropipe). The game is defined by a hypergraph H = (V, E) in which each


vertex v ∈ V has one dose of drain cleaner and there is a token representing water
flow on vertex c ∈ V. Players alternate turns moving the (water flow) token to another
vertex v connected to c by a hyperedge (“pipe”), where v still has its drain cleaner. After
moving the token, the cleaner at v is expended to clean all incident pipes. All other
pipes that were not cleaned are clogged and can no longer be cleaned or traversed
through for the rest of the game. A player that cannot reach a drain cleaner loses the
game.

Because both Power Station and Hydropipe are simply graph-theoretical inter-
pretations of the proof for Theorem 4, the proof also provides the following corollary.

Corollary 1. Power Station and Hydropipe are isomorphic to Transverse Wave and
are therefore PSPACE-complete combinatorial games.

4.2 A social influence game motivated by Demi-Quantum Nim


The connection between Demi-Quantum Boolean Nim and Friend Circle motivates
the social-influence-inspired game Demographic Influence, which mathematically
generalizes Demi-Quantum Nim. In a nutshell the setting has a demographic struc-
ture over a population, in which individuals have their own friend circles. Members
in the population are initially uninfluenced but receptive to ads, and (viral marketing)

influencers try to target their ads at demographic groups to influence the population.
People can be influenced either by influencers’ ads or by “enthusiastic endorsement”
cascading through friend circles.
The following combinatorial game is distilled from the above scenario.

Ruleset 10 (Demographic Influence). A Demographic Influence position is de-


fined by a tuple Z = (G, D, Θ), where
– G = (V, E) is an undirected graph representing a symmetric social network on a
population V.
– D = {D1 , . . . , Dm } is the set of demographics, each a subset of V.
– Θ : V → ℤ represents how resistant each individual is to the product (i. e., their
threshold to being influenced):
– if Θ(v) > 0, then v is uninfluenced,
– if Θ(v) = 0, then v is weakly influenced, and
– if Θ(v) < 0, then v is strongly influenced.

A player’s turn consists of choosing a demographic Dk and the amount they want to
influence, c > 0 where there exists v ∈ Dk such that Θ(v) ≥ c. (Since c > 0, there must
be an uninfluenced member of Dk .)
– For every v ∈ Dk , Θ(v) decreases by c (all individuals in the demographic are influenced by c).
– If Θ(v) became negative by this subtraction (if it went from ℤ+ ∪ {0} to ℤ− ), then
Θ(x) = −1 for all x ∈ NG (v).8

Importantly, we perform all the subtractions and determine which individuals are
newly strongly influenced before they go and strongly influence their friends. We in-
clude an example move in Figure 3.8.
Note that when influencing a demographic Dk , since c cannot be greater than the
highest threshold, the highest-threshold individual will not be strongly influenced by
the subtraction step. (If one of their neighbors does get strongly influenced, then they
will be strongly influenced in that manner.)
Since a player needs to make a move on a demographic group with at least one
uninfluenced individual, the game ends when there are no remaining groups to influ-
ence.
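
Here is a sketch of the move (ours, with thresholds in a dictionary and the graph as a neighbor map); per the rule above, all subtractions are performed before any cascading:

    def demographic_influence_move(neighbors, theta, demographic, c):
        # Feasible iff c > 0 and some member has threshold at least c.
        assert c > 0 and any(theta[v] >= c for v in demographic)
        theta2 = dict(theta)
        newly_strong = []
        for v in demographic:
            theta2[v] = theta[v] - c
            if theta[v] >= 0 and theta2[v] < 0:   # crossed from Z+ ∪ {0} to Z-
                newly_strong.append(v)
        for v in newly_strong:                    # cascade after all subtractions
            for x in neighbors[v]:
                theta2[x] = -1
        return theta2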
The following theorem shows that Demographic Influence generalizes Demi-
Quantum Nim.

Theorem 5 (Demi-Quantum Nim Generalization: Social-Influence Connection). Demo-


graphic Influence contains Demi-Quantum Nim as a special case. Therefore Demo-
graphic Influence is a PSPACE-complete game.

8 Thematically, if an individual becomes strongly influenced directly by the marketing campaign,


then they enthusiastically recommend it to their friends and strongly influence them as well.


Figure 3.8: A Demographic Influence move, influencing D3 = {v1 , v2 , v3 } by 4. Panel (a) shows G, Θ,
and D3 prior to making the move. Panel (b) shows the first part of the move: subtracting from the
thresholds of v1 , v2 , and v3 . Panel (c) shows the final results of the move: since v2 went negative, its
neighbors are set to −1 to show that they have been strongly influenced as well. (The magnitude of
negativity does not matter, so it is okay that the vertex at −2 “goes back” to −1.)

Proof. For every Demi-Quantum Nim instance Z with m realizations of n piles, we con-
struct the following Demographic Influence instance Z ′ , in which (1) V = {(r, c) |
r ∈ [m], c ∈ [n]}, (2) E = {((r1 , c1 ), (r2 , c2 )) | r1 = r2 } (i. e., vertices from all piles from the
same realization are a clique), (3) for all (r, c) ∈ V, Θ((r, c)) is set to be the number of
pebbles that the cth pile has in the rth realization of Nim, (4) D = (D1 , . . . , Dn ), where
Dc = {(r, c) | r ∈ [m]}, i. e., nodes associated with the cth Nim pile.
We claim that Z and Z ′ are isomorphic games. Imagine the two games are played in
tandem. Suppose the player in Demi-Quantum Nim Z makes move (k, q), removing q
pebbles from pile k. In its Demographic Influence counterpart Z ′ , the corresponding
player also plays (k, q), investing q units in demographic group k. Note that in Z, (k, q)
is feasible if and only if in at least one of the realizations, the kth Nim pile has at least
q pebbles. This is the same as q ≤ maxi∈[m] Θ((i, k)). Therefore (k, q) is feasible in Z if and
only if (k, q) is feasible in Z ′ .
When (k, q) is feasible, then for any realization i ∈ [m], there are three cases: (1) if
the kth Nim pile has more pebbles than q, then in that realization a classical transition
is made, and the q pebbles are removed from the pile. This corresponds to the
reduction of the threshold at node (i, k) by q; (2) if the kth Nim pile has exactly q peb-
bles, then all pebbles are removed from the pile. This corresponds to the case where
node (i, k) becomes weakly influenced; (3) if q is more than the number of pebbles in
the kth Nim pile, then the move collapses realization i. This corresponds to the case in
Demographic Influence, where (i, k) becomes strongly influenced and then strongly
influences all other vertices in the same row i. Therefore Z and Z ′ are isomorphic games,
with the collapse of a realization in the quantum version corresponding to the
cascading of influence by endorsement in the friend circle.

The proof of Theorem 5 illustrates that Demographic Influence can be viewed as


a graph-theoretical extension of Nim. Recall Burke-George’s Neighboring Nim, which
extends both classical Nim (when the underlying graph is a clique) and Undirected
Geography (when all Nim heaps have at most one item in them, i. e., Boolean Nim).
The next theorem complements Theorem 5 by showing that Demographic Influence
also generalizes Node-Kayles.

Theorem 6 (Social-Influence Connection with Node-Kayles). Demographic Influ-


ence contains Node-Kayles as a special case.

Proof. Consider a Node-Kayles instance defined by an n-node undirected graph G0 =


(V0 , E0 ) with V0 = [n]. We define a Demographic Influence instance Z = (G, D, Θ)
as follows. (1) For each v ∈ V0 , we introduce a new vertex tv . Let V = V0 ∪ T0 , where
T0 := {tv | v ∈ V0 }. (2) For all v ∈ V0 , Θ(v) = 0 and Θ(tv ) = 1. (3) For all v ∈ V0 ,
NG (v) = NG0 (v) ∪ {tw | w ∈ NG0 (v)} ∪ {tv } and NG (tv ) = {v}. (4) D = {D1 , . . . , Dn }, where
Dv = {v, tv } for all v ∈ V0 .
We show an example of this transformation in Figure 3.9.
Note that because Θ(v) ∈ {0, 1} for all v ∈ V, the space of moves in this Demo-
graphic Influence is {(v, 1) | v ∈ [n]}, whereas in Node-Kayles a move consists of
selecting one of the vertices from [n].
We now show that Demographic Influence over Z is isomorphic to Node-Kayles
over G under the mapping of moves (v, 1) ⇔ v.
For a Node Kayles move at v, it removes v and all x ∈ NG0 (v) from future move
choices. In Demographic Influence, choosing the (v, 1) move means that Θ(tv ) be-
comes 0 and Θ(v) becomes −1. Then v is strongly influenced, and it strongly influences
its neighbors at NG (v) = NG0 (v)∪{tx | x ∈ NG0 (v)}, so all those vertices also get a thresh-
old of −1. Those include both the x and tx vertices for each x ∈ NG0 (v), so it removes all
those neighboring demographics from future moves (x, 1), the set of which is isomor-
phic to those removed from the corresponding Node Kayles move.
Therefore by induction we establish that Demographic Influence over Z is iso-
morphic to Node-Kayles over G under the mapping of moves from v ⇔ (v, 1).

Therefore Demographic Influence simultaneously generalizes Node-Kayles


and Demi-Quantum Nim (which in turn generalizes classical Nim, Avoid True, and
Transverse Wave).
We can also establish that Demographic Influence generalizes Friend Circle
defined in the earlier section.

Theorem 7 (Demographic Influence Generalizes Friend Circle). Demographic In-


fluence contains Friend Circle as a particular case.


Figure 3.9: An example of the reduction from Node Kayles to Demographic Influence.

Proof. For a Friend Circle position, Z = (G, S, w), where G = (V, E), we construct
Demographic Influence instance Z ′ = (G′ , D, Θ) using the following properties.
– For each edge e ∈ E, we create a new vertex ve . Then V ′ = {ve | e ∈ E}.
– E ′ = {(ve1 , ve2 ) | there exists v ∈ V : e1 , e2 both incident to v}.
– G′ = (V ′ , E ′ ).
– D = {Ds }s∈S , where Ds = {ve | e is incident to s}.
– Θ : V ′ → {0, 1}, where Θ(ve ) = 1 if w(e) = f, and Θ(ve ) = 0 if w(e) = t.

In other words, the connections are built on the line graph of the underlying graph
in Friend Circle. Each seed vertex defines the demographic group and associates
with all edges incident to it. Targeting this demographic group influences all these
edges and edges adjacent to t-edges in this set. See Figure 3.10 for an example of the
reduction.
We complete the proof by showing that a play on Friend Circle position s is
isomorphic to playing (Ds , 1) on Demographic Influence, meaning choosing demo-
graphic Ds and investing c = 1. In Friend Circle, playing at s means that for all e
incident to s: (1) w(e) becomes t, and (2) if w(e) was already t, then for all f adjacent
to e, w(f ) becomes t.
In Demographic Influence, the corresponding play (Ds , 1) means that for all
ve ∈ Ds :
– Θ(ve ) is reduced by 1, which corresponds to setting w(e) to t;
– if Θ(ve ) becomes −1, then for all vf ∈ NG′ (ve ), we have that Θ(vf ) also becomes −1;
by our definition of E ′ , these vf are exactly the vertices for which w(e) was previously
t and f is adjacent to e in G.

Thus, following analogous moves, w(e) = t if and only if Θ(ve ) ≤ 0. A seed vertex s′ ∈ S
is surrounded by t-edges (and ineligible as a move) exactly when all vertices ve ∈ Ds′


Figure 3.10: Example of the reduction. On the left, there is a Friend Circle instance. On the right,
there is the resulting Demographic Influence position.

are influenced, also making Ds′ ineligible as a move. This mapping of moves shows
that the games are isomorphic.

5 Conclusions and future work


One of the beautiful aspects of Winning Ways [3] is the relationships between games,
especially when positions in one ruleset can be transformed into equivalent instances
of another ruleset. As examples, Dawson’s Chess positions are equivalent to Node
Kayles positions on paths, Wythoff’s Nim is the one-queen case of Wyt Queens,
and Subtraction-{1, 2, 3, 4} positions exist as instances of many rulesets, including
Adders and Ladders with one token after the top of the last ladder and last snake.
Transforming instances of one ruleset to another (reductions) is a basic part of
Combinatorial Game Theory,9 just as it is vital to computational complexity. Trans-
verse Wave arose not only by reducing to other things, but more concretely as a par-
ticular case of other games we explored.
“Ruleset A is a special case of ruleset B” (i. e., “B is a generalization of A”) not only
proves that computational hardness of A results in the computational hardness of B,
but also:
– if A is deep, then it is a fundamental part of strategies of B; and
– if rules of B are straightforward, then A can be a basic building block in creating
other fun rulesets.

Relevant special-case/generalization relationships between games presented here in-


clude the following.

9 “Change the Game!” is the title of a section in Lessons in Play, Chapter 1, “Basic Techniques” [1].

– Demi-Quantum Nim is a generalization of Nim. (See Section 3.) (This is true of any
ruleset R and Demi-Quantum R.)
– Demi-Quantum Nim is a generalization of Transverse Wave. (Via Demi-Quan-
tum Boolean Nim, see Section 3.2.)
– Friend Circle is a generalization of both Transverse Wave (Section 2.3) and
Node Kayles (Section 2.2).
– Demographic Influence is a generalization of both Demi-Quantum-Nim and
Friend Circle (Section 4.2).

We show these relationships in a lattice manner in Figure 3.11. Understanding Trans-


verse Wave is a key piece of the two other rulesets, which also include the impartial
classics Nim and Node Kayles.

[Figure 3.11 diagram: Nim and Transverse Wave have arrows to Demi-Quantum Nim;
Transverse Wave and Node Kayles have arrows to Friend Circle; Demi-Quantum Nim
and Friend Circle have arrows to Demographic Influence.]

Figure 3.11: Generalization relationships of the rulesets in this paper. A → B means that A is a
particular case of B and B is a generalization of A.

Furthermore, several of the relationships outside of Figure 3.11 that were discussed in
this paper were completely isomorphic, preserving the game values, not just winnabil-
ity (as in Section 1.4). More explicitly, any new findings on the Grundy values for
Transverse Wave also give those exact same results for Crosswise OR, Crosswise
AND, Demi-Quantum Boolean Nim, Avoid True, Power Station, and Hydropipe.
We have been drawn to Transverse Wave not only because it is colorful, ap-
proachable, and intriguing, but also because the relationships with other games have
inspired us to discover more connections among games. Our work offers us a glimpse
of the lattice order induced by special-case/generalization relationships over mathe-
matical games, which we believe is an instrumental framework for both the design and
comparative analysis of combinatorial games. In one direction of this lattice, when
given two combinatorial games A and B, it is a stimulating and creative process to de-

sign a game with the simplest ruleset that generalizes both A and B.10 For example,
in generalizing both Nim and Undirected Geography, Neighboring Nim highlights
the role of “self-loops” in Graph-Nim. In our work the aim of capturing both Node
Kayles and Demi-Quantum Nim has contributed to our design of Demographic In-
fluence. In the other direction, identifying a well-formulated basic game at the in-
tersection of two seemingly unconnected games may greatly expand our understand-
ing of game structures. It is also a refinement process for identifying intrinsic build-
ing blocks and fundamental games. By exploring the lattice order of game relation-
ships we will continue to improve our understanding of combinatorial game theory
and identify new fundamental games inspired by the rapidly evolving world of data,
networks, and computing.

Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, A. K. Peters, Wellesley, Massachusetts, 2007.
[2] G. Beaulieu, K. G. Burke, and É. Duchêne, Impartial coloring games, Theoret. Comput. Sci. 485
(2013), 49–60.
[3] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for your Mathematical Plays,
volume 1, A. K. Peters, Wellesley, Massachusetts, 2001.
[4] C. L. Bouton, Nim, a game with a complete mathematical theory, Annals of Mathematics 3(1/4)
(1901), 35–39.
[5] K. Burke, M. Ferland, and S.-H. Teng, Quantum combinatorial games: structures and
computational complexity, CoRR abs/2011.03704, 2020.
[6] K. W. Burke and O. George, A PSPACE-complete graph Nim, in Games of No Chance 5, (2019),
259–270.
[7] K. W. Burke and S.-H. Teng, Atropos: a PSPACE-complete Sperner triangle game, Internet
Mathematics 5(4) (2008), 477–492.
[8] K. W. Burke, Science for Fun: New Impartial Board Games, PhD thesis, USA, 2009.
[9] W. Chen, S.-H. Teng, and H. Zhang, A graph-theoretical basis of stochastic-cascading network
influence: characterizations of influence-based centrality, Theor. Comput. Sci. 824-825 (2020),
92–111.
[10] P. Dorbec and M. Mhalla, Toward quantum combinatorial games, arXiv preprint
arXiv:1701.02193, 2017.
[11] D. Eppstein, Computational complexity of games and puzzles, 2006, http://www.ics.uci.edu/~eppstein/cgt/hard.html.
[12] S. Even and R. E. Tarjan, A combinatorial problem which is complete in polynomial space,
J. ACM 23(4) (1976), 710–719.
[13] A. S. Fraenkel and D. Lichtenstein, Computing a perfect strategy for n × n chess requires time
exponential in n, J. Comb. Theory, Ser. A 31(2) (1981), 199–214.

10 It is also a relevant pedagogical question to ask when introducing students to combinatorial game
theory.

[14] A. S. Fraenkel, E. R. Scheinerman, and D. Ullman, Undirected edge geography, Theor. Comput.
Sci. 112(2) (1993), 371–381.
[15] M. Fukuyama, A Nim game played on graphs, Theor. Comput. Sci. 304(1–3) (2003), 387–399.
[16] D. Gale, A curious Nim-type game, American Mathematical Monthly 81 (1974), 876–879.
[17] D. Gale, The game of hex and the Brouwer fixed-point theorem, American Mathematical
Monthly 86(10) (1979), 818–827.
[18] A. Goff, Quantum tic-tac-toe: a teaching metaphor for superposition in quantum mechanics,
American Journal of Physics 74(11) (2006), 962–973.
[19] P. M. Grundy, Mathematics and games, Eureka 2 (1939), 198–211.
[20] D. Kempe, J. Kleinberg, and E. Tardos, Maximizing the spread of influence through a social
network, in Proceedings of the 9th ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, KDD ’03 (2003), 137–146.
[21] D. Lichtenstein and M. Sipser, Go is polynomial-space hard, J. ACM 27(2) (1980), 393–401.
[22] J. F. Nash, Some Games and Machines for Playing Them, RAND Corporation, Santa Monica, CA
1952.
[23] S. Reisch, Hex ist PSPACE-vollständig, Acta Inf. 15 (1981), 167–191.
[24] M. Richardson and P. Domingos, Mining knowledge-sharing sites for viral marketing, In
Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and
Data Mining, KDD ’02 (2002), 61–70.
[25] T. J. Schaefer, On the complexity of some two-person perfect-information games, Journal of
Computer and System Sciences 16(2) (1978), 185–225.
[26] R. P. Sprague, Über mathematische Kampfspiele, Tôhoku Mathematical Journal 41 (1935-36),
438–444.
[27] G. Stockman, Presentation: The game of Nim on graphs: NimG, 2004. Available at http://www.aladdin.cs.cmu.edu/reu/mini_probes/papers/final_stockman.ppt.
Alda Carvalho, Melissa A. Huggan, Richard J. Nowakowski, and
Carlos Pereira dos Santos
A note on numbers
Abstract: When are all positions of a game numbers? We show that two properties
are necessary and sufficient. These properties are consequences of the fact that, in
a number, it is not an advantage to be the first player. One of these properties implies
the other. However, checking for one or the other, rather than just one, can often be
accomplished by only looking at the positions on the “board”. If the stronger property
holds for all positions, then the values are integers.

1 Introduction
When analyzing games, an early question is: is it possible that all the positions are
numbers? If that is true, then it is easy to determine the outcome of a disjunctive sum
of positions; just add up the numbers. It is also easy to find the best move; just play
the summand with the largest denominator. The problem is how to recognize when all
the positions are numbers.
Siegel [8, Exercise 3.15] states “If every incentive of G is negative, then G is a number.” This does not provide much insight or intuition. In fact, in most non-all-small games, there are nonzero positions, some of which are numbers and others not. Let S be a set of positions of a ruleset. It is called a hereditarily closed set of positions of a ruleset (HCR) if it is closed under taking options. These HCR sets are the natural objects to consider.

Acknowledgement: Alda Carvalho: Project CEMAPRE/REM–UIDB/05069/2020 financed by FCT/MCTES through national funds.
Melissa A. Huggan: Supported by the Natural Sciences and Engineering Research Council of Canada
(funding reference number PDF-532564-2019).
Richard J. Nowakowski: Supported by the Natural Sciences and Engineering Research Council of
Canada (funding reference number 2019-04914).
Carlos Pereira dos Santos: Partially Supported by FCT – Fundação para a Ciência e Tecnologia, under
the project UIDB/04721/2020.

Alda Carvalho, ISEL–IPL & CEMAPRE/REM–University of Lisbon, Lisbon, Portugal, e-mail: alda.carvalho@isel.pt
Melissa A. Huggan, Department of Mathematics, Ryerson University, Toronto, Ontario, Canada,
e-mail: melissa.huggan@ryerson.ca
Richard J. Nowakowski, Department of Mathematics and Statistics, Dalhousie University, Halifax,
Nova Scotia, Canada, e-mail: r.nowakowski@dal.ca
Carlos Pereira dos Santos, Center for Functional Analysis, Linear Structures and Applications,
University of Lisbon & ISEL–IPL, Lisbon, Portugal, e-mail: cmfsantos@fc.ul.pt

https://doi.org/10.1515/9783110755411-004

There are two properties, either of which, if satisfied by all followers of a position, tells us that the position is a number. Both are aspects of the first-move disadvantage in numbers. The first compares a single move against a pair of moves. Consider a pair of moves for the two players, (GL , GR ). The property says that Right choosing GR does not make Left miss the opportunity represented by GL : after Right’s move to GR , Left has an option that is at least as good as GL . This idea of “no loss” is also present in Definition 1.2.

Definition 1.1 (F1 Property). Let S be an HCR. Given G ∈ S, the pair (GL , GR ) ∈ Gℒ × Gℛ
satisfies the F1 property if there is GRL ∈ GRℒ such that GRL ⩾ GL or there is GLR ∈ GLℛ
such that GLR ⩽ GR .

The second property involves moves by both players.

Definition 1.2 (F2 Property). Let S be an HCR. Given G ∈ S, (GL , GR ) ∈ Gℒ ×Gℛ satisfies
the F2 property if there are GLR ∈ GLℛ and GRL ∈ GRℒ such that GRL ⩾ GLR .

In many games the literal form of positions will tell us whether they satisfy the F1
property or the F2 property with equality. See Section 2 for examples.
When analyzing a new game, there are several results that can be used to gain an
insight into its structure. The next two should be included in this list. If every position
satisfies either property, then the values are numbers. If every position satisfies the F2
property, then the result is stronger. These have already been used in [3] and would
have helped when writing [1, 4]; see the examples in Section 2.

Lemma 3.2. Let S be an HCR. If for any position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ
satisfy the F1 property or the F2 property, then all positions G ∈ S are numbers.

Theorem 3.5. Let S be an HCR. If for any position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ satisfy the F2 property, then all positions G ∈ S are integers.

Lemma 3.3 proves that the F2 property implies the F1 property. In practice, however, it is often easier to prove that at least one of the two properties holds than to prove that the F1 property alone holds.
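For readers who wish to experiment, the following Python sketch (ours, not part of the paper) mechanizes the two checks. A game form is represented as a pair of tuples of options, ge is the standard recursive comparison of game forms, and f1 and f2 test a pair of options against Definitions 1.1 and 1.2; all names are our own.

from functools import lru_cache

# A game form is a pair (L, R) of tuples of Left and Right options.
ZERO = ((), ())
STAR = ((ZERO,), (ZERO,))

@lru_cache(maxsize=None)
def ge(G, H):
    # G >= H iff no Right option of G is <= H and no Left option of H is >= G.
    return (all(not ge(H, GR) for GR in G[1]) and
            all(not ge(HL, G) for HL in H[0]))

def f1(GL, GR):
    # F1 property: some GRL >= GL, or some GLR <= GR.
    return (any(ge(GRL, GL) for GRL in GR[0]) or
            any(ge(GR, GLR) for GLR in GL[1]))

def f2(GL, GR):
    # F2 property: some GLR among GL's Right options and GRL among
    # GR's Left options with GRL >= GLR.
    return any(ge(GRL, GLR) for GLR in GL[1] for GRL in GR[0])

# The pair ({0, * | *}, {* | 0, *}) discussed in Section 3 satisfies
# the F2 property but not the F1 property:
GL = ((ZERO, STAR), (STAR,))
GR = ((STAR,), (ZERO, STAR))
print(f1(GL, GR), f2(GL, GR))  # False True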
We recall the results about numbers needed for this paper.

Theorem 1.3 ([1, 2, 8]). Let G be a number whose options are numbers.
1. After removing dominated options, the form of G has at most one Left option and at
most one Right option.
2. For the options that exist, GL < G < GR .
3. If there is an integer k such that GL < k < GR or if either GL or GR does not exist,
then G is an integer.
4. If both GL and GR exist and the previous case does not apply, then G is the simplest
number between GL and GR .

The most important point to remember is item 2: when a player plays in a number, the situation gets worse for them. This has an important consequence when games are being analyzed.

Theorem 1.4 (Number Avoidance Theorem [1, 2, 8]). Suppose that G is a number and H
is not. If Left can win moving first on G + H, then Left can do so with a move on H.

In many cases, when checking the properties, the Left and Right options will re-
fer to two specific moves on the “game board”, one by Left and one by Right. If this
happens, then the actual positions will automatically give the stronger conditions,
GLR ≅ GRL or GRL ≅ GL . Moreover, no calculations are required. Examples are given in
Section 2.

2 Examples and a warning


In these examples, we illustrate that, sometimes, only two specific moves on the “game
board”, one for each player, are sufficient. We will refer to the specific moves by lower
case letters, ℓ for Left and r for Right.
We first sketch a proof to show that the values of polychromatic chomp (see
Appendix for the ruleset), blue-red-hackenbush strings [2, 9] are numbers, and
that cutcake [2] positions are integers. We then give the properties that the follow-
ing games satisfy: domino shave [4], shove [1], push [1], lenres [7], divisors, and
partizan turning turtles (called flipping coins in [3]) (see Appendix for the last
two rulesets). Two games, partizan euclid [5] and partizan subtraction [6], are
examples where many positions satisfy one or both properties. However, since there
are positions that satisfy neither, only a few positions are numbers.

Example 2.1. Let G be a polychromatic chomp position. Let ℓ and r be moves on black and gray squares, respectively. If neither move eliminates the other, then playing both moves, in either order, results in the same position Q, i. e., Gℓr ≅ Grℓ . Suppose instead that ℓ eliminates r, as illustrated in Figure 4.1. In this case, Left can play her move before or after Right’s move, i. e., Gℓ ≅ Grℓ .
Hence all (GL , GR ) satisfy one or both properties. Therefore by Lemma 3.2 all poly-
chromatic chomp positions are numbers.

Example 2.2. Let G be a blue-red-hackenbush string, and let ℓ and r be the edges
played by Left and Right, respectively. If r is higher up the string than ℓ, then playing
ℓ eliminates r. Thus Grℓ and Gℓ are identical. Otherwise, playing r eliminates ℓ, and
Gℓr ≅ Gr .
Hence by Lemma 3.2 all blue-red-hackenbush strings are numbers.

Example 2.3. Consider an m × n cutcake position. The moves are not independent
but almost so. For given ℓ and r, consider the pair of options, GL = m × (n − ℓ) + m × ℓ

Figure 4.1: F1 argument in polychromatic chomp.

and GR = (m − r) × n + r × n, and their options

GLR = m × (n − ℓ) + (m − r) × ℓ + r × ℓ,
GRL = (m − r) × n + r × (n − ℓ) + r × ℓ.

The two moves cannot simply be interchanged to get the same board position. However, we
know that if i > j, then k × i ⩾ k × j (intuitively, there are more moves for Left in k × i
than in k × j), and similarly i × k ⩽ j × k. The terms of GLR and GRL pair off: r × ℓ is in
both, (m − r) × ℓ ⩽ (m − r) × n and m × (n − ℓ) ⩽ r × (n − ℓ). Therefore GLR ⩽ GRL , and so
(GL , GR ) satisfies the F2 property. Therefore by Theorem 3.5 G is an integer.

Example 2.4.
1. Given a shove position G, if any token is pushed off the end of the strip, then
(Gℓ , Gr ) satisfies the F1 property; if not, then (Gℓ , Gr ) satisfies the F2 property.
2. Given a push position G, if any token pushes the other, then (Gℓ , Gr ) satisfies the
F1 property; if not, then (Gℓ , Gr ) satisfies the F2 property.
3. Given a lenres position G, if any digit in the move replaces the other, then (Gℓ , Gr )
satisfies the F1 property; if not, then (Gℓ , Gr ) satisfies the F2 property.
4. Given a domino shave position G, (GL , GR ) ∈ Gℒ × Gℛ satisfies the F1 property.
5. Given a divisors position G = (l, r) and a pair of options, (GL , GR ) = ((ℓ′ , r), (ℓ, r ′ )),
if ℓ′ = r or ℓ = r ′ , then (GL , GR ) satisfies the F1 property; if not, then (GL , GR )
satisfies the F2 property.

6. Given a partizan turning turtles position G, if GL and GR conflict, then (GL , GR ) satisfies the F1 property; if GL and GR do not conflict, then (GL , GR ) satisfies the F2 property.

Even more can be said about blue-red-cherries [1] and erosion [1]. In these games, all pairs satisfy the F2 property, and thus by Theorem 3.5 all values are integers.

Example 2.5.
1. Given a blue-red-cherries position G, all (GL , GR ) ∈ Gℒ × Gℛ satisfy the F2 prop-
erty. If Left removes a cherry from one end (ℓ) and Right removes a cherry from the
other end (r), then Gℓr ≅ Grℓ .
2. Given an erosion position G, all (GL , GR ) ∈ Gℒ ×Gℛ satisfy the F2 property. This is
vacuously true since by the rules it is impossible for both players to have options
at the same time.

For the values all to be numbers, the properties must always be true. It is not suf-
ficient for most of the positions to satisfy them. Two games that have 𝒩 -positions but
where many of the positions naturally satisfy one or the other property are the follow-
ing.
1. F1: In partizan euclid [5] with G = (p, q) and p > 2q, GLR = GR or GRL = GL .
2. F2: In the partizan subtraction subset of splittles [6], let a be the largest number that can be taken. Suppose the heap size is n ⩾ 2a; then Left taking ℓ and Right taking r results in a heap of size n − ℓ − r regardless of the order. Thus Gℓr ≅ Grℓ .

3 Proofs
Theorem 3.1 is the central theoretical result: All the positions in an HCR set are
numbers if and only if there is no position and no number such that the sum is
an 𝒩 -position. This is all that is required to prove Lemma 3.2. Lemma 3.3 shows that
the F2 property implies the F1 property with strict inequality. In fact, Lemma 3.2 may
be written as a necessary and sufficient condition. This is Theorem 3.4, which only
uses the F1 property. Theorem 3.5 shows that if every position satisfies the F2 property,
then the numbers will be integers.

Theorem 3.1 (Outcomes and numbers). Let S be an HCR. All positions G ∈ S are num-
bers if and only if there is no G ∈ S and a number x such that G + x ∈ 𝒩 .

Proof. (⇒) If all positions G ∈ S are numbers, then, regardless of what the numbers x
are, all G + x are numbers. Hence there is no G ∈ S and a number x such that G + x ∈ 𝒩 .
(⇐) Let G ∈ S. If Gℒ = ∅ or Gℛ = ∅, then G is an integer. Suppose that Gℒ ≠ ∅ and Gℛ ≠ ∅. By induction, since S is hereditarily closed, all GL ∈ Gℒ and GR ∈ Gℛ are numbers. Hence, after removing dominated options, there are three possible cases:

(1) G = {a | a} = a + ∗, where a is a number;
(2) G = {a | b} = (a + b)/2 ± (a − b)/2, where a and b are numbers, and a > b;
(3) G = {a | b}, where a and b are numbers, and a < b.

If (1) or (2), then G − a ∈ 𝒩 or G − (a + b)/2 ∈ 𝒩 , contradicting the assumptions. Therefore we must have (3). Now G is the simplest number strictly between a and b.
In all cases, G is a number, and the theorem follows.

Now a natural question arises: Is it easy, in practice, to know if an HCR has no positions such that G + x ∈ 𝒩 ? In other words, is Theorem 3.1 useful? Lemma 3.2 answers that question.

Lemma 3.2. Let S be an HCR. If for any position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ
satisfy the F1 property or the F2 property, then all positions G ∈ S are numbers.

Proof. By Theorem 3.1 it suffices to prove that if all pairs (GL , GR ) ∈ Gℒ × Gℛ satisfy
the F1 property or the F2 property, then there is no G ∈ S and a number x such that
G + x ∈ 𝒩.
For the contrapositive, suppose that there are a position G ∈ S and a number x
such that G + x ∈ 𝒩 . Assume that the birthday of G in such conditions is the smallest
possible.
Since G + x ∈ 𝒩 , there are GL + x ⩾ 0 and GR + x ⩽ 0 (Theorem 1.4). Due to the
hypothesis, the pair (GL , GR ) satisfies the F1 property or the F2 property. If the pair
satisfies the F1 property, then there is GRL such that GRL ⩾ GL , or there is GLR such
that GLR ⩽ GR . If the first happens, then GRL ⩾ GL implies GRL + x ⩾ GL + x ⩾ 0.
That is incompatible with GR + x ⩽ 0. If the second happens, then GLR ⩽ GR implies
GLR + x ⩽ GR + x ⩽ 0. That is incompatible with GL + x ⩾ 0. In either case, we have a
contradiction; the pair (GL , GR ) cannot satisfy the F1 property.
Hence the pair (GL , GR ) satisfies the F2 property, and there are GLR ∈ S and GRL ∈ S
such that GLR ⩽ GRL . Since GL + x ⩾ 0, we have GLR + x ⩽̸ 0. Also, since GR + x ⩽ 0,
we have GRL + x ⩾̸ 0. The second inequality allows us to conclude that GLR + x ⩾̸ 0
because GLR ⩽ GRL . However, GLR + x ⩽̸ 0 and GLR + x ⩾̸ 0 together imply that GLR + x ∈ 𝒩 , contradicting the smallest birthday assumption. Therefore the pair (GL , GR ) cannot satisfy the F2 property.
The pair (GL , GR ) thus satisfies neither the F1 property nor the F2 property, which contradicts the hypothesis. Hence there is no G ∈ S and number x such that G + x ∈ 𝒩 . Therefore
all positions G ∈ S are numbers.

It is possible to have a pair of options that satisfies the F2 property without satis-
fying the F1 property; an example of that is a pair like (GL , GR ) = ({0, ∗ | ∗}, {∗ | 0, ∗}).
However, the options of ∗ do not satisfy the F2 property. On the other hand, if all fol-
lowers also satisfy the F2 property, then the next lemma shows that all pairs satisfy
the F1 property.

Lemma 3.3. Let S be an HCR such that, given any position G ∈ S, all pairs (GL , GR ) ∈
Gℒ × Gℛ satisfy the F2 property. Then all pairs satisfy the F1 property.

Proof. It suffices to prove that if a pair (GL , GR ) ∈ Gℒ × Gℛ satisfies the F2 property, then it also satisfies the F1 property.
Suppose that a pair (GL , GR ) satisfies the F2 property. If so, then by definition there
are GLR and GRL such that GLR ⩽ GRL . Since GLR is a right option of GL , we have
GLR ⩽̸ GL . On the other hand, by Lemma 3.2 all positions of S are numbers, so GLR can-
not be incomparable with GL . Therefore we must have GLR > GL , and consequently
GRL ⩾ GLR > GL . Thus (GL , GR ) satisfies the F1 property.

Theorem 3.4. Let S be an HCR. All positions G ∈ S are numbers if and only if for any
position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ satisfy the F1 property.

Proof. The reverse direction is a consequence of Lemma 3.2.
For the forward direction, let G ∈ S and (GL , GR ) ∈ Gℒ × Gℛ . Since G is a number, GL < G < GR , and thus GL − GR < 0. Since Right, playing first, wins GL − GR , either there is GLR with GLR − GR ⩽ 0 or some GRL with GL − GRL ⩽ 0. Hence GLR ⩽ GR or GL ⩽ GRL ,
and thus by definition (GL , GR ) satisfies the F1 property.

By Lemma 3.3, if all pairs (GL , GR ) satisfy the F1 property or the F2 property, then
all pairs satisfy the F1 property. That means that a pair satisfying the F2 property also
satisfies the F1 property. However, observe that the opposite is not true: it is possible to
have a pair satisfying the F1 property without satisfying the F2 property. For example,
if G = 1/2 = {0 | 1} (canonical form), then the pair (0, 1) satisfies the F1 property and does not satisfy the F2 property because GLℛ = ∅. The F2 property is a stronger condition
and has a surprising consequence.

Theorem 3.5. Let S be an HCR. If for any position G ∈ S, all pairs (GL , GR ) ∈ Gℒ × Gℛ
satisfy the F2 property, then all positions G ∈ S are integers.

Proof. Let G ∈ S. If Gℒ = ∅ or Gℛ = ∅, then G is an integer, and the theorem holds. Suppose that Gℒ ≠ ∅ and Gℛ ≠ ∅. By Lemma 3.2 all positions G ∈ S are numbers, and the canonical form is G = {GL | GR }. By induction, GL and GR are integers. Since (GL , GR ) satisfies the F2 property, there are GLR and GRL with GLR ⩽ GRL , and by induction both are integers. Let GRL = k. Since a player moving in a number worsens their position, GL < GLR ⩽ k and k = GRL < GR . Therefore GL < k < GR , and thus, by Theorem 1.3, G is an integer.

Observation 3.6. Theorem 3.5 exhibits a sufficient but not necessary condition. Con-
sider S, an HCR whose game forms are {−2 | 0}, −2, −1, and 0 (the last three canonical
forms). Of course, all game values of S are integers. However, regarding G = {−2 | 0},
the pair (GL , GR ) = (−2, 0) does not satisfy the F2 property.

Appendix. Rulesets
divisors

Position: An ordered pair of positive integers (l, r).

Moves: Left is allowed to replace (l, r) by (l′ , r), where l′ < l is a divisor of r. Right is
allowed to replace (l, r) by (l, r ′ ), where r ′ < r is a divisor of l.

(5, 4) →ᴸ (2, 4) →ᴿ (2, 1) →ᴸ (1, 1)

partizan turning turtles

Position: A line of turtles. A turtle may be on its feet or on its back.

Moves: Left is allowed to choose two upside-down turtles and turn them onto their feet. Right is also allowed to choose a pair of turtles, provided that the leftmost is on its feet and the other is on its back; his move is to turn over both turtles.

polychromatic chomp

Position: A grid with one poison square in the lower left corner. Besides the poison
square, each square is either black or gray.

Moves: On her turn, Left chooses a black square and removes it and all other squares
above or to the right of it. On his turn, Right moves analogously, but he has to choose
a gray square.

Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, A. K. Peters, Wellesley, MA, 2007.

[2] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Academic Press, London, 1982.
[3] A. Bonato, M. A. Huggan, and R. J. Nowakowski, The game of Flipping Coins, arXiv:2102.13225.
[4] A. Carvalho, M. A. Huggan, R. J. Nowakowski, and C. P. dos Santos, Ordinal Sums, Clockwise
Hackenbush, and Domino Shave, Integers, to appear.
[5] N. A. McKay and R. J. Nowakowski, Outcomes of partizan Euclid, in Integers 12B (2012/13):
Proceedings of the Integers Conference 2011, Paper No. A9, 15 pp.
[6] G. A. Mesdal, Partizan Splittles, in Games of No Chance 3, Cambridge Univ. Press, 2009,
447–461.
[7] A. A. Siegel, On the Structure of Games and their Posets, Ph. D. thesis, Dalhousie University,
2011.
[8] A. N. Siegel, Combinatorial Game Theory, American Math. Soc., Providence, RI, 2013.
[9] T. van Roode, Partizan Forms of Hackenbush Combinatorial Games, M. Sc. Thesis, University of
Calgary, 2002.
Alda Carvalho, Melissa A. Huggan, Richard J. Nowakowski, and
Carlos Pereira dos Santos
Ordinal sums, clockwise hackenbush, and
domino shave
Dedicated to Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy; they taught us so much

Abstract: We present two rulesets, domino shave and clockwise hackenbush. The first is somewhat natural and has, as special cases, stirling shave and Hetyei’s Bernoulli game. Clockwise hackenbush seems artificial, yet it is equivalent to domino shave. From the pictorial form of the game and a knowledge of hackenbush, the decomposition into ordinal sums is immediate. The values of clockwise blue-red hackenbush are numbers, and we provide an explicit formula for the ordinal sum of numbers where the literal form of the base is {x | } or { | x}, and x is a number. That formula generalizes van Roode’s signed binary number method for blue-red hackenbush.

1 Introduction
Hackenbush is a central game in Winning Ways [4]. It has many interesting properties.
The relationship between the ordinal sum decomposition and the valuation scheme
for paths and trees is central in this paper. The literature also includes variants with
new intriguing properties in new contexts. For example, yellow-brown hackenbush
[3] and all-small games, hackenbush sprigs [12] and misère games, and toppling
dominoes [7] and hot games.

Acknowledgement: Alda Carvalho is a CEMAPRE member and has the support of Project CEMAPRE –
UID/MULTI/00491/2019 financed by FCT/MCTES through national funds.
Melissa A. Huggan was supported by the Natural Sciences and Engineering Research Council of Canada
(funding reference number PDF-532564-2019).
Richard J. Nowakowski was supported by the Natural Sciences and Engineering Research Council of
Canada (funding reference number 2019-04914).
Carlos Santos is a CEAFEL member and has the support of UID/MAT/04721/2019 strategic project.

Alda Carvalho, ISEL–IPL & CEMAPRE–University of Lisbon, Lisbon, Portugal, e-mail: acarvalho@adm.isel.pt
Melissa A. Huggan, Department of Mathematics, Ryerson University, Toronto, Ontario, Canada,
e-mail: melissa.huggan@ryerson.ca
Richard J. Nowakowski, Department of Mathematics and Statistics, Dalhousie University, Halifax,
Nova Scotia, Canada, e-mail: r.nowakowski@dal.ca
Carlos Pereira dos Santos, Center for Functional Analysis, Linear Structures and Applications,
University of Lisbon & ISEL–IPL, Lisbon, Portugal, e-mail: cmfsantos@fc.ul.pt

https://doi.org/10.1515/9783110755411-005

In this paper, we introduce two rulesets, clockwise hackenbush and domino shave. The first is a new variant of hackenbush trees, and the second is the partizan version of stirling shave.
We first provide a complete solution for clockwise blue-red hackenbush. As
in blue-red hackenbush trees, the best moves are the ones highest up the tree; see
Lemma 2.3. We then give a method for calculating the value of a position. This is
accomplished by giving a decomposition theorem in terms of ordinal sums (Theo-
rem 2.2). In Theorem 1.3 explicit formulas are given for the ordinal sum of numbers
when the base is a blue-red hackenbush string or when the base is in canonical form
[14]. In contrast, the evaluation of a tree in blue-red hackenbush involves iterating
ordinal sums via signed binary numbers and disjunctive sums.
One of the main contributions of this paper is Theorem 2.16, which gives the for-
mula for the ordinal sum of numbers where the literal form of the base is {x | } or { | x},
and x is a number.
Whereas clockwise hackenbush may seem a little artificial, domino shave
seems natural. It is a partizan version of stirling shave [8], which, in turn, was
suggested by Hetyei’s Bernoulli game [9, 10]. The main result in Section 3 is that
clockwise hackenbush and domino shave are equivalent games. Moreover, a po-
sition in one can be easily transformed to a position in the other. As an interesting
sidelight, we also show that Hetyei’s Bernoulli game is an instance of stirling shave, thereby giving the first complete analysis of the game.

1.1 The rules of the games


A clockwise hackenbush position is a tree with blue, red, and green edges, which
are connected to the ground. The rightmost edges form the trunk, and the players can
only remove edges from the trunk. There are two players, Left and Right. On Left’s turn,
she may remove a blue or green edge from the trunk. On Right’s turn, he may remove
a red or green edge from the trunk. Afterward, any edge not connected to the ground
is also removed. In the figures, blue edges are denoted by solid lines, and red edges by
dashed lines.
We draw the trunk vertically. Figure 5.1 and Figure 5.2 show two clockwise blue-
red hackenbush positions and their options. Note that as play progresses, a branch
that was not on the trunk can become part of the trunk (the trunk shifts clockwise).
See the first Left option in Figure 5.1 and Figure 5.2. Different drawings of the same
hackenbush tree could result in different trunks and are therefore different clock-
wise hackenbush positions.
Domino shave, not surprisingly, involves dominoes. For us, a domino is an or-
dered pair of nonnegative integers, written d = (l, r). We will distinguish the num-
bers: l is the left spot, and r is the right spot. A line of k dominoes will be described as

Figure 5.1: A clockwise blue-red hackenbush position.

Figure 5.2: A second clockwise blue-red hackenbush position.

d1 , d2 , . . . , dk or as (l1 , r1 ), (l2 , r2 ), . . . , (lk , rk ), 0 ⩽ li , ri . A domino is blue if li < ri , red if li > ri , and green if li = ri .
A domino shave position is a line of dominoes. The two players take turns making
moves. On Left’s move, she may remove a green or blue domino di and all the others
with greater index leaving d1 , d2 , . . . , di−1 , provided that, for all j ⩾ i, li ⩽ lj and li ⩽ rj .
On Right’s move, he may remove a green or red domino di leaving d1 , d2 , . . . , di−1 , provided that, for all j ⩾ i, lj ⩾ ri and rj ⩾ ri . In words, Left has a legal move at (li , ri ) if li is smaller than or equal to ri and, comparing li to all the dominoes to its right, li is less than or equal to lj and rj for every j ⩾ i. If this condition fails for some domino of index j, then the move at (li , ri ) is not legal for Left. Similarly for Right.
See Figure 5.3 for an example of a domino shave position and its options; the latter
are indicated in the figure.

Figure 5.3: Example of a domino shave position.

For this paper, normal play is the winning convention. The readers can consult any
edition of Winning Ways [4], specifically the sections on hackenbush, to gain further
insight. We assume general knowledge about normal play, but to keep the material
self-contained, we clarify some ideas about the concepts of ordinal sum and, also, the
particular case of ordinal sums of blue-red hackenbush strings.

1.2 Ordinal sum


In a blue-red hackenbush string, if a player moves on the bottom, then the top dis-
appears; if a player moves on the top, then nothing happens to the bottom. This idea
motivates the concept of the ordinal sum. In the ordinal sum of two games G : H, a
player may move in either G (base) or H (subordinate) with the additional constraint
that any move on G completely annihilates the component H. The recursive definition
is

G : H = {Gℒ , G : H ℒ | Gℛ , G : H ℛ }.

The Colon Principle states that the form of the base matters, but not the form of the
subordinate. Formally,

Colon Principle ([4]). If H ⩾ H ′ , then G : H ⩾ G : H ′ .

Note that although it is true that H = H ′ implies G : H = G : H ′ , it is not true that G = G′ implies G : H = G′ : H. For example, G = {0 | 2} and G′ = {0 | } are different forms with game value 1, and we have G : 1 = 1 1/2 and G′ : 1 = 2.
In fact, we can be more precise about the role of the base of an ordinal sum. The
following theorem shows that the problem only happens if the literal form of the base
has reversible options. If it has no reversible options, then we can replace the literal
form of the base by its canonical form without changing the game value.

Theorem 1.1 (McKay’s theorem). If G has no reversible options and K is the canonical
form of G, then G : H = K : H.

Proof. See [11], p. 42.


Here we will prove some results about ordinal sums of the form {G | } : H. In those
ordinal sums, G is not the base; the base is {G | }. Also, as stated in Theorem 1.2, the
game value of {G | } : H does not depend on the game form of G.

Theorem 1.2. Let G, G′ , and H be game forms. If G = G′ , then {G | } : H = {G′ | } : H.

Proof. Suppose that in the game {G | } : H + { | −G′ } : (−H), Right moves to {G | } : H R + { | −G′ } : (−H) or to {G | } : H + { | −G′ } : (−H L ). Then Left answers {G | } : H R + { | −G′ } : (−H R ) or {G | } : H L + { | −G′ } : (−H L ), respectively, and, by induction, she wins. On the other hand, if Right moves to {G | } : H − G′ , then Left replies G − G′ and wins, since G = G′ . Analogously, if Left plays first in {G | } : H + { | −G′ } : (−H), then she loses. Hence {G | } : H + { | −G′ } : (−H) is a 𝒫 -position, and {G | } : H = {G′ | } : H.

1.3 Ordinal sums of blue-red hackenbush strings


It is known that the game values of blue-red hackenbush strings are numbers and
that there is a correspondence between the game values of blue-red hackenbush
strings and signed binary representations [14].
The part of a blue-red hackenbush string after the first color change is represented by the digits after the binary point, whose value is a sum of powers of 2. So, when we write n.1̄1̄1 . . . (the overlines indicate negative powers of 2), the represented value is n − 1/2 − 1/4 + 1/8 + ⋅ ⋅ ⋅ . The signed binary notation is particularly appropriate for simultaneously describing the game value of the blue-red hackenbush string and its sequence of blue and red edges. In the following example, 2.1̄1̄1 stands for two blue edges, one red edge, one red edge, and one blue edge; see Figure 5.4.

Figure 5.4: Example of signed binary notation for a blue-red hackenbush string.
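As an aside (ours, not the authors’), the signed binary digits of a positive dyadic can be recovered greedily: the integer part is ⌈v⌉, and each subsequent ±1 digit moves the running remainder toward 0. A minimal Python sketch, for positive dyadics only:

from fractions import Fraction
from math import ceil

def signed_binary(v):
    # Signed binary expansion of a positive dyadic v: returns (n, digits)
    # with v = n + sum(digits[k-1] / 2**k), each digit +1 (blue) or -1 (red).
    v = Fraction(v)
    n = ceil(v)
    digits, r, k = [], v - n, 1
    while r != 0:
        if r < 0:
            digits.append(-1); r += Fraction(1, 2**k)
        else:
            digits.append(+1); r -= Fraction(1, 2**k)
        k += 1
    return n, digits

print(signed_binary(Fraction(11, 8)))  # (2, [-1, -1, 1]), i.e., 2.1̄1̄1 as in Figure 5.4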

Also, if G and H are two blue-red hackenbush strings, it is possible to have a closed
formula to evaluate the game value of G : H knowing the game values of G and H. That
is van Roode’s method [14].

Theorem 1.3 (van Roode’s method). Let G be a positive blue-red hackenbush string whose game value is n + d with −1 < d = −k/2^j ⩽ 0. Then we have the following.
1. If G is an integer and H is a positive blue-red hackenbush string, then G : H = G + H.
2. If G is not an integer and H is a positive blue-red hackenbush string whose game value is m + d′ , where m is a positive integer and −1 < d′ ⩽ 0, then G : H = n + d + (2^m − 1 + d′ )/2^(j+m) .
3. If H is a negative blue-red hackenbush string whose game value is m + d′ , where m is a negative integer and 0 ⩽ d′ < 1, then G : H = n + d + (1 − 2^|m| + d′ )/2^(j+|m|) .

Proof. This result is well known and follows from either Berlekamp’s or van Roode’s
rule for a blue-red hackenbush string [14] and [2].

Example 1.4. Consider blue-red hackenbush strings G and H with game values

G = 3/8 = 1 − 5/8,  H = 3 1/2 = 4 − 1/2,

so that n = 1, d = −5/2^3 , j = 3, m = 4, and d′ = −1/2. We want to evaluate G : H. We have, by Case 2 of Theorem 1.3,

G : H = n + d + (2^m − 1 + d′ )/2^(j+m) = 1 − 5/8 + (2^4 − 1 − 1/2)/2^7 = 125/256.

Example 1.5. Consider blue-red hackenbush strings G and H with game values

G = 2 5/8 = 3 − 3/8,  H = −1 3/4 = −2 + 1/4,

so that n = 3, d = −3/2^3 , j = 3, m = −2, and d′ = 1/4. We want to evaluate G : H. We have, by Case 3 of Theorem 1.3,

G : H = n + d + (1 − 2^|m| + d′ )/2^(j+|m|) = 3 − 3/8 + (1 − 2^2 + 1/4)/2^5 = 2 69/128.
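The cases of Theorem 1.3 translate directly into code. The following Python sketch (ours) evaluates G : H from the game values alone, assuming G > 0 and H ≠ 0 as in the theorem, and reproduces Examples 1.4 and 1.5; exact arithmetic uses fractions.Fraction.

from fractions import Fraction
from math import ceil, floor

def van_roode(g, h):
    # Ordinal sum g : h of blue-red hackenbush strings (Theorem 1.3);
    # assumes g > 0 and h nonzero.
    g, h = Fraction(g), Fraction(h)
    n = ceil(g)
    d = g - n                               # -1 < d = -k/2^j <= 0
    j = d.denominator.bit_length() - 1      # j = 0 when g is an integer
    if h > 0:
        if d == 0:                          # Case 1: integer base
            return g + h
        m = ceil(h); dp = h - m             # h = m + d', -1 < d' <= 0
        return n + d + (2**m - 1 + dp) / 2**(j + m)
    m = floor(h); dp = h - m                # Case 3: h = m + d', m < 0
    return n + d + (1 - 2**(-m) + dp) / 2**(j - m)

print(van_roode(Fraction(3, 8), Fraction(7, 2)))    # 125/256 (Example 1.4)
print(van_roode(Fraction(21, 8), Fraction(-7, 4)))  # 325/128 = 2 69/128 (Example 1.5)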

Van Roode’s method was conceived to evaluate ordinal sums of blue-red hackenbush strings. However, since blue-red hackenbush strings have no reversible options,¹ by Theorem 1.1 this method can be used to evaluate an ordinal sum of numbers where the base is in canonical form.

2 The analysis of clockwise hackenbush


To analyze clockwise hackenbush positions and facilitate the proofs, it is important
to have notation for the important elements.

Definition 2.1. Let G be a clockwise blue-red hackenbush position. Let TG be the trunk of G with V(TG ) = {s0 , s1 , . . . , sn } and E(TG ) = {t1 , t2 , . . . , tn }, all labeled from bottom to top. Let Gi be the position resulting from the deletion of ti ; let B1 = G1 , and let Bi = Gi \ (Gi−1 ∪ {ti−1 }) for i > 1. Finally, let Mi = Bi ∪ {ti } for i ⩾ 1.

The subtree B1 is the part of the tree remaining after deleting t1 , and, for i > 1, not counting ti , Bi is the part of the tree that is eliminated by deleting ti−1 but not by deleting ti or, in other words, the subtree above ti−1 that does not include ti . The idea is represented in Figure 5.5.

¹ In fact, given a blue-red hackenbush string G and a Left option GL = {. . . | GLR , . . .}, we cannot have G ⩾ GLR , since in the game G − GLR , Right wins by moving to GLR − GLR . A similar argument holds for the lack of reversible Right options.


Figure 5.5: Notation for the elements of a clockwise hackenbush position.

Theorem 2.2. Let G be a clockwise blue-red hackenbush position. Then G = M1 : (M2 : (. . . : (Mn−1 : Mn ) . . .)).

Proof. The proof follows by induction on the size of G. If E(TG ) = {t1 }, then G = M1 .
We may now suppose that E(TG ) = {t1 , t2 , . . . , tn } and n > 1. Let H be the position
formed by G \ M1 , that is, the tree above but not including t1 , and the vertex s1 is the
ground. The trunk of H is {t2 , t3 , . . . , tn }.
In G, there are two types of moves. Either in M1 , delete t1 leaving B1 ; or delete ti ,
i > 1, which is a move in H. By induction, the move in H is to M1 : H L (M1 : H R ) for
some Left (Right) option of H. Also by induction, H = M2 : (. . . : (Mn−1 : Mn ) . . .). Then
it follows that

G = {M1L , M1 : H ℒ | M1R , M1 : H ℛ }
= M1 : (M2 : (. . . : (Mn−1 : Mn ) . . .)),

and the result is proved.

Theorem 2.2 shows that we will have to evaluate ordinal sums. If the values
were arbitrary, then no formula could be given. However, clockwise blue-red hack-
enbush positions have similar strategic features to blue-red hackenbush strings.
Specifically, for either player, the unique best move is their highest, and the value is a
number. This we prove next. Each Mi has only one option, that of deleting the trunk

edge. Therefore the ordinal sums will be of the form {x | } : y or { | x } : y for numbers x
and y. A closed formula for this type of ordinal sum is one of the main contributions
of this paper (Subsection 2.2). Before that, we prove that clockwise blue-red hack-
enbush positions only have numbers as game values and that the best options for the
players are the topmost allowed moves.

Lemma 2.3. Let G be a clockwise blue-red hackenbush position. If ti and tj are blue
edges and j > i, then Gj > Gi . If ti and tj are red edges and i < j, then Gj < Gi .

Proof. We first assume that ti and tj are both blue edges and show that Gj − Gi > 0.
The proof follows by induction on the number of edges in G. If G consists of exactly
two blue edges, then G = 2. If Left deletes the higher edge, then this leaves a tree with
exactly one blue edge which has value 1. If she deletes the lower edge, then this leaves
a tree with zero edges, and it has value 0. Thus the lemma holds for the base case. We
now suppose G has more than two edges.
Left, going first, can win by deleting ti in Gj since this results in Gi − Gi = 0.
Now consider Right moving first. If Right plays an edge of Gj but does not eliminate
the edge ti , then Left responds in Gj by deleting ti . Again, this results in Gi − Gi = 0. If
Right plays in Gj and does eliminate ti , then he has deleted an edge on the trunk, i. e.,
some tℓ , ℓ < i. This leaves Gℓ −Gi . Left responds in −Gi by deleting tℓ that, by symmetry,
is a blue edge. This gives Gℓ − Gℓ = 0.
The last remaining case is that Right deletes an edge on the new trunk in −Gi . Let the trunk of Gi be T1 = {t1′ , t2′ , . . . , tm′ }, where ta′ = ta for 1 ⩽ a ⩽ i − 1. Right deletes tℓ′ for i ⩽ ℓ ⩽ m. We claim that deleting ti in Gj is a winning move. To see this, let H be identical to Gi but with an extra blue edge tm+1′ at the top of T1 . After Right has deleted tℓ′ in −Gi and Left ti in Gj , the situation is identical to playing in Hm+1 − Hℓ . Now both ti and tj are not in H, and thus H has at least one fewer edge than G. It follows by induction that Hm+1 − Hℓ > 0.
The proof for when ti and tj are both red edges is similar, following from considering negatives.

Corollary 2.4. Let G be a clockwise blue-red hackenbush position.
1. Left’s (Right’s) move of deleting the topmost blue (red) edge on the trunk dominates all other options.
2. The value of G is a number.

Proof. Part 1 follows immediately from Lemma 2.3. Part 1 gives that G has only one undominated Left option and one undominated Right option, i. e., G = {GL | GR }. By induction, both options GL and GR are numbers. Let H be G with an extra blue edge on the top of
the trunk. Both G and GL are Left options of H, and GL < G by Lemma 2.3. Similarly,
by adding a red edge we have G < GR . Since GL < GR , G is a number.

2.1 Simplicity rule and binary notation


Iterated ordinal sums occur naturally in clockwise hackenbush, and the goal of this
section is to find a procedure that evaluates them. In what follows, recall that the form
of the base is important. For example, let n be a number and consider the ordinal sum
{n | } : 2. The good moves are the topmost, and thus

{ n | } : 2 = {{ n | } : 1 | } = {{{ n | } | } | }.

Similarly,

{ n | } : −2 = { n | { n | } : −1} = { n | { n | {n | }}}.

In either case, the Simplicity Rule must be applied three times in a row, and one of the
options remains the same. This motivates the following definition.
If a and b are numbers and a < b, then the value of {a | b} is the dyadic rational p/2^q with a < p/2^q < b, and q is minimal. In other words, we will often use the fact that

{a | b} is the number c, a < c < b, that has the fewest number of digits in its binary expansion.

The next result makes explicit the simplicity rule for evaluating {a | b} for numbers
0 ⩽ a < 1 and a < b. We will then generalize the rule for iterated ordinal sums in Sec-
tion 2.2. The procedure will use the binary expansions of numbers. Each dyadic only
has a finite number of nonzero bits in its binary expansion; however, the procedure
sometimes uses 0-bits past the last 1-bit. Therefore, although we denote the binary
expansion of d by d =2 0.d1 d2 . . . dn , when we refer to the “first index” or “first occur-
rence”, we may consider the infinite binary expansion. We abuse the “=2 ” notation
to mean that the important terms following the equal sign will be in binary. If this is
followed by another “=” sign, then we have reverted to base 10.

Theorem 2.5. Let d be a dyadic rational such that 0 < d < 1 and d =2 0.d1 d2 . . . dn . Let d < d′ ⩽ +∞, and if d′ < 1, then let d′ =2 0.d1′ d2′ . . . dm′ .
1. If d′ > 1, then {d | d′ } = 1.
2. If d′ = 1 and i is the index of the first 0-bit of the binary expansion of d, then {d | 1} = 1 − 1/2^i .
3. If d′ < 1, then let i be the first index such that di = 0 and di′ = 1. Also, let j be the least index such that j > i and dj = 0.
If d′ ≠2 0.d1 d2 d3 . . . di−1 1, then {d | d′ } =2 0.d1 d2 d3 . . . di−1 1.
If d′ =2 0.d1 d2 d3 . . . di−1 1, then {d | d′ } =2 0.d1 d2 d3 . . . dj−1 1.

Example 2.6. Demonstrating how to apply Theorem 2.5:

{21/32 | 45/64} =2 {0.10101 | 0.101101} =2 0.1011 = 11/16;
{75/128 | 19/32} =2 {0.1001011 | 0.10011} =2 0.10010111 = 151/256.
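A short Python sketch (ours) of the boxed fact above: it searches for the smallest q such that some p/2^q lies strictly between a and b, and it confirms both lines of Example 2.6.

from fractions import Fraction
from math import floor

def simplest_between(a, b):
    # Value of {a | b} for numbers a < b: the dyadic p/2^q strictly between
    # a and b with q minimal (and, among integers, closest to zero).
    a, b = Fraction(a), Fraction(b)
    if a < 0 < b:
        return Fraction(0)
    if b <= 0:                      # mirror the negative case
        return -simplest_between(-b, -a)
    q = 0
    while True:
        p = floor(a * 2**q) + 1     # least p with p/2^q > a
        if Fraction(p, 2**q) < b:
            return Fraction(p, 2**q)
        q += 1

print(simplest_between(Fraction(21, 32), Fraction(45, 64)))   # 11/16
print(simplest_between(Fraction(75, 128), Fraction(19, 32)))  # 151/256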

Proof of Theorem 2.5. The first case is trivial.
In the second case, by definition {d | 1} > d; then each of the first i − 1 digits of the binary expansion of {d | 1} must be ones. Therefore {d | 1} = k/2^j with j ⩾ i. Observe now that inserting one more “1” in position i produces a dyadic strictly larger than d and strictly smaller than 1. Therefore the simplest dyadic that fits between d and 1 is 1 − 1/2^i =2 0.11 . . . 11 (i ones).
Regarding the third case, since d < {d | d′ } < d′ , the first i − 1 digits of the binary expansion of {d | d′ } must be d1 , d2 , d3 , . . . , di−1 . Therefore {d | d′ } = k/2^w for some k and w ⩾ i. If d′ ≠2 0.d1 d2 d3 . . . di−1 1, then 0.d1 d2 d3 . . . di−1 1 is the simplest dyadic that fits between d and d′ . If d′ =2 0.d1 d2 d3 . . . di−1 1, then since d < {d | d′ } < d′ , the first j − 1 digits of the binary expansion of {d | d′ } must be d1 , d2 , d3 , . . . , dj−1 . In that case the simplest dyadic that fits between d and d′ is 0.d1 d2 d3 . . . dj−1 1.

We have seen that there are two types of ordinal sums that occur in clockwise
blue-red hackenbush. We write the formulas explicitly. The first, in Theorem 1.3, is
standard and appears in the analysis of blue-red hackenbush strings. The second
happens when the literal form of the base is {x | } or { | x}, where x is a number. That is
analyzed in the next section.

2.2 Ordinal sums of numbers: the literal form of the base is {x | } or { | x}, where x is a number
The second type of ordinal sum that occurs in clockwise blue-red hackenbush is
{d | } : m. It still involves numbers, but the base is not in canonical form. Some prelim-
inary results are needed first.
If n is a number, then the Translation Principle states

{Gℒ + n | Gℛ + n} = n + {Gℒ | Gℛ }

[1, 4, 6, 13]. The following theorem describes a version of the translation principle for
ordinal sums. Once we have this result, the case { d | }: number (0 ⩽ d < 1) turns out
to be the only case to study.

Lemma 2.7 (Translation principle for ordinal sums of numbers). Let 0 ⩽ d < 1 be a
dyadic rational, let w be any number, and let n be an integer. Then

{n + d | } : w = n + ({d | } : w).

Proof. Let {wL | wR } be the canonical form of w. We have

{n + d | } : {wL | wR } = {n + d, {n + d | } : wL | {n + d | } : wR }
= {n + d, n + ({d | } : wL ) | n + ({d | } : wR )}  (by induction)
= n + {d, {d | } : wL | {d | } : wR }  (by the translation principle)
= n + ({d | } : {wL | wR }).

Lemma 2.8. Let d be a dyadic rational, 0 ⩽ d < 1, and let m be an integer. If m > 0, then

{d | } : m = {{d | } : m − 1 | }.

If m < 0, then

{d | } : m = {d | {d | } : (m + 1)}.

Proof. Let m be a positive integer. By definition

{d | } : m = {d, {d | } : 0, {d | } : 1, . . . , {d | } : (m − 1) | },
{d | } : (−m) = {d | {d | } : 0, {d | } : −1, . . . , {d | } : (−m + 1)}.

For any integer k, let G = {d | } : k − {d | } : (k − 1). We claim that G ⩾ 0. Suppose k > 0. In G, Right can only play in −({d | } : (k − 1)), and for any move he makes, Left has the corresponding move in {d | } : k. This results in {d | } : i − {d | } : i = 0.
Suppose k ⩽ 0. Now in G, Right has moves in both components, but, again, Left has the corresponding move in the other component. This leaves a position equal to 0. Thus {d | } : k − {d | } : (k − 1) ⩾ 0 for all k.
This result shows that

{d | } : m = {d, {d | } : (m − 1) | } if m > 0, and
{d | } : m = {d | {d | } : (m + 1)} if m < 0.

Finally, if m > 0, then d ⩽ {d | } : (m − 1). This follows since, in {d | } : (m − 1) − d, Right can only move to {d | } : (m − 1) − d′ , where −d′ > −d, and Left responds to d − d′ > 0. Thus, for m > 0, the canonical form of {d | } : m is {{d | } : (m − 1) | }.

Corollary 2.9. Let d be a dyadic rational, 0 ⩽ d < 1, and let m be an integer. If m is positive, then {d | } : m = ({d | } : (m − 1)) : 1. If m is negative, then {d | } : m = ({d | } : (m + 1)) : −1.

Proof. If m > 0, then

({d | } : (m − 1)) : 1 = {{d | } : (m − 1) | } = {d | } : m.

If m < 0, then

({d | } : (m + 1)) : −1 = {d | {d | } : (m + 1)} = {d | } : m.

Theorem 2.10. Let d =2 0.d1 d2 . . . dk , and let m be an integer.
1. If m ⩾ 0, then {d | } : m = m + 1.
2. If m < 0, then {d | } : m =2 0.d1 d2 d3 . . . dj−1 1, where j is the index of the |m|th zero digit of the binary expansion of d.

Proof. First, suppose m ⩾ 0. We have {d | } : m = ({d | } : (m − 1)) : 1. Since {d | } : 0 = 1, we have, by induction, ({d | } : (m − 1)) : 1 = m : 1. Finally, m : 1 = m + 1.
Now suppose m < 0.
If d = 0, then the theorem states {0 | } : m = 2^m . This follows easily by induction as follows. First, by Lemma 2.8, {0 | } : 0 = 1, and {0 | } : m = {0 | {0 | } : (m + 1)}. By induction, {0 | } : m = {0 | 2^(m+1) }, and since, by Theorem 2.5, {0 | 2^(m+1) } = 2^m , this part of the result is proved.
We may now assume that d > 0.
If m = −1, then, by Lemma 2.8, {d | } : −1 = {d | {d | }} = {d | 1}. Now, by
Theorem 2.5, {d | 1} =2 0.d1 d2 d3 . . . dj−1 1, where j is the index of the first 0-bit of the
binary expansion of d.
If m < −1, then, by induction, {d | } : (m + 1) =2 0.d1 d2 d3 . . . dj−1 1, where j is the
index of the |m + 1|th zero digit of the binary expansion of d. Now, by Lemma 2.8,
{d | } : m = {d | {d | } : (m + 1)}. Again by Theorem 2.5, the binary expansion of
{d | {d | } : (m+1)} is obtained by replacing by “1” the first 0-bit in the binary expansion
of d after the position j (and the following digits are all zero). That bit is the |m|th zero
digit of the binary expansion of d, and this finishes the proof.

Observation 2.11. One consequence of Theorem 2.10 is that, for 0 ⩽ d < 1 and a posi-
tive integer m, { d | } : m = { d | } + m, that is, the ordinal sum coincides with the usual
sum.

Example 2.12.

{309/512 | } : −3 =2 {0.100110101 | } : −3 =2 0.100111 = 39/64.
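Theorem 2.10, part 2, is easy to mechanize: scan the binary expansion of d, keep the digits before the |m|th zero, and set that zero to 1. A Python sketch (ours), checked against Example 2.12:

from fractions import Fraction

def base_val(d, m):
    # {d | } : m for a dyadic 0 <= d < 1 and an integer m (Theorem 2.10).
    d = Fraction(d)
    if m >= 0:
        return Fraction(m + 1)
    val, r, j, zeros = Fraction(0), d, 0, 0
    while True:
        j += 1
        r *= 2
        bit = int(r)              # j-th digit of the (infinite) expansion of d
        r -= bit
        if bit == 1:
            val += Fraction(1, 2**j)
        else:
            zeros += 1
            if zeros == -m:       # the |m|-th zero: replace it by 1 and stop
                return val + Fraction(1, 2**j)

print(base_val(Fraction(309, 512), -3))  # 39/64, as in Example 2.12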

As mentioned before, the signed binary notation is more useful for game practice because of the correspondence 1–“blue edge” and 1̄–“red edge”. The following theorem, concerning the use of signed binary representations, is presented without proof, since it is similar to the previous one.

Theorem 2.13. Let d be a dyadic rational such that 0 < d < 1, and let 1.1̄d2 . . . dk be its signed binary expansion. Let m be a negative integer. The signed binary expansion of {d | } : m is obtained in the following way.

Case 1. If the number of minus ones in the signed binary expansion of d is larger than |m|, then the signed binary expansion of {d | } : m is 1.1̄d2 d3 . . . di−1 , where i is the index of the (|m| + 1)th 1̄-bit in the signed binary expansion of d.

Case 2. If the number n of minus ones in the signed binary expansion of d is less than or equal to |m|, then the signed binary expansion of {d | } : m is 1.1̄d2 d3 . . . dk 1 1̄1̄ . . . 1̄, ending with |m| − n 1̄s.

Example 2.14.

{173/512 | } : −3 =2 {1.1̄1̄11̄11̄111̄ | } : −3 =2 1.1̄1̄11̄1 = 11/32.

The last case that needs to be evaluated is where G = {d | } : (m + d′ ), m is an integer, and d and d′ are dyadic rationals between 0 and 1.

Corollary 2.15. Let d be a dyadic rational such that 0 ⩽ d < 1.
1. If m is a nonnegative integer, then {d | } : m = m + 1.
2. If m is a negative integer, then {0 | } : m = 2^m .
3. If m is a negative integer and d =2 0.d1 d2 . . . dk , d ≠ 0, then {d | } : m =2 0.d1 d2 d3 . . . dj−1 1, where j is the index of the |m|th zero digit of the binary expansion of d.

Proof. These are restatements, via Lemma 2.8, of Theorem 2.10: part 1 for part 1, and part 2 for parts 2 and 3.

Theorem 2.16 (Main result for numbers). Consider G = {n + d | } : (m + d′ ), where 0 ⩽ d, d′ < 1 are dyadics, and n, m ∈ ℤ. Let k/2^j be the simplest form of {d | } : m. Then

G = n + k/2^j + d′/2^j .

Proof. By Lemma 2.7, {n + d | } : (m + d′ ) = n + ({d | } : (m + d′ )), so we only need to analyze G′ = {d | } : (m + d′ ) using the fact that G = n + G′ .

Case 1. d = 0 and m ⩾ 0.
We have that G′ is {0 | } : (m + d′ ), and {0 | } is the canonical form of 1. Therefore, by Theorem 1.3, G′ = {0 | } : (m + d′ ) = 1 + m + d′ . By Corollary 2.15, {d | } : m = m + 1 = (m + 1)/2^0 , so G′ = (m + 1)/2^0 + d′/2^0 , and the theorem holds.

Case 2. d = 0 and m < 0.
We have that G′ is {0 | } : (m + d′ ), and {0 | } is the canonical form of 1. Therefore, by Theorem 1.3,

G′ = {0 | } : (m + d′ ) = 1 + (1 − 2^|m| + d′ )/2^|m| = 1/2^|m| + d′/2^|m| .

By Corollary 2.15, {d | } : m = 1/2^|m| , so G′ = 1/2^|m| + d′/2^|m| , and the theorem holds.

Case 3. d > 0 and m ⩾ 0.
By Corollary 2.15, part 1, G′ = m + d′ + 1 and {d | } : m = m + 1 = (m + 1)/2^0 , so G′ = (m + 1)/2^0 + d′/2^0 , and the theorem holds.

Case 4. d > 0 and m < 0.
This is the hardest case. To prove it, we will construct a blue-red hackenbush string H whose value is k/2^j + d′/2^j (Part 1). We will then prove that G′ − H is a 𝒫 -position (Part 2).

(Part 1) Let 1.1̄d2 d3 . . . dv be the signed binary expansion of d. Since 0 < d < 1, the first digit after the binary point is 1̄; assume that this expansion has q 1̄s in all. Let 1.1̄d2′ d3′ . . . dw′ be the signed binary expansion of d′ . Since 0 < d′ < 1, the first digit after the binary point is again 1̄.
Consider the hardest case |m| > q. By Theorem 2.13 we know that

1.1̄d2 d3 . . . dv 1 1̄1̄ . . . 1̄  (ending with |m| − q 1̄s)

is the signed binary expansion of the game value of {d | } : m; it contains |m| 1̄s in all. The hypothesis of the current theorem states that this is k/2^j . Hence there are j binary places.
Now the game value of the following blue-red hackenbush string H is k/2^j + d′/2^j :

H = 1.1̄d2 d3 . . . dv 1 1̄1̄ . . . 1̄ 1d2′ d3′ . . . dw′ ,

where the first j places spell {d | } : m = k/2^j and the appended digits 1d2′ d3′ . . . dw′ spell d′ (note that 0.1d2′ d3′ . . . dw′ =2 d′ ). That happens because the added rightmost part, whose value is d′ , is shifted by j binary places.

(Part 2) To finish the proof, we have to show that G′ − H is a 𝒫 -position. By Theorem 1.2 we can use the following game form of G′ , which also uses blue-red hackenbush strings; the subordinate is a blue-red hackenbush string whose value is m + d′ :

G′ = {1.1̄d2 d3 . . . dv | } : 1̄1̄ . . . 1̄ 1d2′ d3′ . . . dw′ ,

where the subordinate starts with |m| 1̄s followed by the digits of d′ . Let us verify that G′ − H = 0, that is, that G′ + (−H) is a 𝒫 -position, where −H is the string obtained from H by negating every digit (reversing the color of every edge). The initial part of −H spells { | −d} : (−m) = −k/2^j , with |m| 1s in all, and its final digits spell −d′ .
First, there is a correspondence between the moves in the digits of d′ and −d′ . Also, there is a correspondence between Right moves in the |m| 1̄s of the subordinate of the upper component and Left moves in the 1s of { | −d} : (−m) in the bottom component. Regarding those correspondences, there is a Tweedledee–Tweedledum strategy.
Second, if Left moves to 1.1̄d2 d3 . . . dv = d in the upper component (entering the base), then Right answers by removing the 1̄ immediately after the digits of −d in the bottom component, and vice versa.
Third, if Right removes any digit of −d in the bottom component, then Left answers with 1.1̄d2 d3 . . . dv (entering the base) in the upper component, and wins.
Since the second player wins, G′ − H ∈ 𝒫 , and G′ = H = k/2^j + d′/2^j .

Observation 2.17. Essentially, if m + d′ ⩾ 0, then the ordinal sum {d | } : (m + d′ ) is the sum {d | } + m + d′ ; if, instead, m + d′ < 0, then Corollary 2.15 is needed.

2.3 Determination of the game value of a clockwise blue-red hackenbush position
Consider again the clockwise blue-red hackenbush position exhibited in Figure 5.1.
To compute its game value, let us compute first the game value of the subposition
presented in Figure 5.6.

Figure 5.6: A relevant subposition.

We have to determine the value of −1/2 : ({ | 1} : 1). To compute { | 1} : 1, we need the position to be in the correct form to apply Theorem 2.16. Hence we instead use {−1 | } : −1 and will negate the resulting value.
Consider {−1 | } : −1. By Theorem 2.16, since n = −1, d = 0, m = −1, d′ = 0, and {d | } : m = 1/2, we have {−1 | } : −1 = −1 + 1/2 = −1/2. Hence { | 1} : 1 = 1/2.
Finally, using van Roode’s evaluation, −1/2 : ({ | 1} : 1) = −1/2 : 1/2 = −3/8.
Regarding the clockwise blue-red hackenbush position exhibited in Figure 5.1,
we have the situation presented in Figure 5.7.

Figure 5.7: Figure 5.1 revisited.

We have to determine the value of 1/2 : ({−3/8 | } : −1). We start with {−3/8 | } : −1. To apply Theorem 2.16, we first rewrite the expression as {−1 + 5/8 | } : −1 and observe that n = −1, d = 5/8, m = −1, and d′ = 0. By Theorem 2.10,

{d | } : m =2 {0.101 | } : −1 =2 0.11 = 3/4.

Now using Theorem 2.16, we have {−3/8 | } : −1 = −1 + 3/4 = −1/4.
Using again van Roode’s evaluation, we have 1/2 : ({−3/8 | } : −1) = 1/2 : (−1/4) = 7/16. This is the game value of the proposed position.
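The whole computation can also be scripted. The sketch below (ours; base_val repeats the Theorem 2.10 snippet given after Example 2.12 so that this block runs on its own) implements Theorem 2.16 and reproduces the ordinal sums just evaluated.

from fractions import Fraction
from math import floor

def base_val(d, m):
    # {d | } : m for a dyadic 0 <= d < 1 and an integer m (Theorem 2.10).
    d = Fraction(d)
    if m >= 0:
        return Fraction(m + 1)
    val, r, j, zeros = Fraction(0), d, 0, 0
    while True:
        j += 1; r *= 2; bit = int(r); r -= bit
        if bit == 1:
            val += Fraction(1, 2**j)
        else:
            zeros += 1
            if zeros == -m:
                return val + Fraction(1, 2**j)

def theorem_2_16(x, y):
    # {x | } : y for numbers x, y: write x = n + d and y = m + d' with
    # 0 <= d, d' < 1; with k/2^j = {d | } : m, the value is n + k/2^j + d'/2^j.
    x, y = Fraction(x), Fraction(y)
    n, m = floor(x), floor(y)
    kj = base_val(x - n, m)
    return n + kj + (y - m) / kj.denominator   # kj.denominator is 2^j

print(theorem_2_16(Fraction(-1), -1))      # -1/2, so { | 1} : 1 = 1/2
print(theorem_2_16(Fraction(-3, 8), -1))   # -1/4, as computed above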

Exercise. Verify that the game value of the clockwise blue-red hackenbush position exhibited in Figure 5.2 is given by

{−2 | } : (−1 : ({−1 | } : −1)) = −1 1/4.

3 domino shave
We first find a normalized version of domino shave and then show that this is equiva-
lent to clockwise hackenbush by giving a bijection between the positions. We then
note which selection of dominoes gives rise to games already in the literature. As well,
we show that Hetyei’s Bernoulli game is a subset of stirling shave.

3.1 Normalized domino shave


Let D be a domino shave position d1 , d2 , . . . , dk . We normalize the string using the fol-
lowing algorithm. Part of the algorithm involves assigning new colors. So in Step 2,
with s = 1, we consider the whole line, but with s > 1, there will be colors other than
red, blue, and green.
1. Set s = 1 and p = 1.
2. In the right-most consecutive line that contains no aqua, pink or emerald domi-
noes, let Es be the set of indices of the dominoes that can be played.
Consider the dominoes with indices in Es . Starting at the left (least index) domino:
– if it is blue, then replace it by (p, p + 1), colored aqua;
– if it is red, then replace it by (p + 1, p), colored pink;
– if it is green, then replace it by (p, p), colored emerald.
Repeat with the blue, red, or green domino of least index in Es . When all dominoes
in Es have been replaced, go to Step 3.
3. Set s := s + 1 and p := p + 2. If there are any blue, red, or green dominoes, repeat
Step 2. If not, then recolor the aqua dominoes blue, the pink dominoes red, and
the emerald dominoes green, and stop.

Example 3.1. Let G = (2, 4)(7, 3)(1, 2)(4, 4)(3, 2). The steps of the algorithm are shown
in Table 5.1, where a change of color is indicated by [a, b].

Table 5.1: Conversion to Normalized domino shave.

(s, p) | Old Line | Dominoes indexed in Es | New Line
(1, 1) | (2, 4)(7, 3)(1, 2)(4, 4)(3, 2) | (1, 2)(3, 2) | (2, 4)(7, 3)[1, 2](4, 4)[2, 1]
(2, 3) | (2, 4)(7, 3)[1, 2](4, 4)[2, 1] | (4, 4) | (2, 4)(7, 3)[1, 2][3, 3][2, 1]
(3, 5) | (2, 4)(7, 3)[1, 2][3, 3][2, 1] | (2, 4)(7, 3) | [5, 6][6, 5][1, 2][3, 3][2, 1]
(4, 7) | [5, 6][6, 5][1, 2][3, 3][2, 1] | (none) | (5, 6)(6, 5)(1, 2)(3, 3)(2, 1)
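To make the procedure concrete, here is a Python sketch (ours) of the normalization algorithm. It encodes our reading, consistent with Example 3.1, that dominoes recolored in the same pass share the same p and that playability in Step 2 is judged within the current run only; it reproduces Table 5.1.

def normalize(line):
    # line: list of (l, r) pairs; returns the normalized domino shave line.
    n = len(line)
    done = [False] * n        # already recolored (aqua/pink/emerald)
    out = list(line)
    p = 1
    while not all(done):
        # rightmost maximal consecutive run of not-yet-recolored dominoes
        end = max(i for i in range(n) if not done[i])
        start = end
        while start > 0 and not done[start - 1]:
            start -= 1
        # E_s: dominoes of the run playable considering the run on its own
        E = []
        for i in range(start, end + 1):
            l, r = line[i]
            rest = line[i:end + 1]
            left_ok = l <= r and all(l <= a and l <= b for a, b in rest)
            right_ok = r <= l and all(a >= r and b >= r for a, b in rest)
            if left_ok or right_ok:
                E.append(i)
        for i in E:           # recolor, preserving each domino's color
            l, r = line[i]
            out[i] = (p, p + 1) if l < r else ((p + 1, p) if l > r else (p, p))
            done[i] = True
        p += 2
    return out

G = [(2, 4), (7, 3), (1, 2), (4, 4), (3, 2)]
print(normalize(G))  # [(5, 6), (6, 5), (1, 2), (3, 3), (2, 1)], as in Table 5.1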

The partition of the indices into E1 , E2 , . . . is independent of the normalization. It points to a very important result.

Lemma 3.2. Let f be the largest index in Ea , a > 1. The domino df +1 prevents every
domino in Ea from being played.

Proof. Let g be the smallest index of the dominoes in Ea . This gives Ea = {g, g + 1, . . . , f }. After df +1 has been played, every domino di , g ⩽ i ⩽ f , is playable. Thus

min{lf , rf } ⩾ min{lf −1 , rf −1 } ⩾ ⋅ ⋅ ⋅ ⩾ min{lg , rg }.

Since g ∈ Ea , there exists dj , j > g, j ∈ Ea−1 , that prevents dg from being played. (If
no such domino exists, then g ∈ Ea−1 .) We may assume that j is the least index. Thus

min{lg , rg } > min{lj , rj }. Since f + 1, j ∈ Ea−1 and f + 1 ⩽ j, then dj does not prevent df +1
being played. This gives min{lj , rj } > min{lf +1 , rf +1 }. Combining the inequalities yields
min{li , ri } > min{lf +1 , rf +1 } for g ⩽ i ⩽ f , that is, df +1 prevents all of Ea being played.

The properties of the normalization algorithm that we require follow immediately from the algorithm steps.

Lemma 3.3. Let D = (d1 , d2 , . . . , dk ) be a domino shave position, and let D′ = (d1′ , d2′ , . . . , dk′ ) be the normalized position.
1. The indices of the dominoes of D are partitioned into subsets E1 , E2 , . . . , Ef .
2. If i ∈ Ea , j ∈ Eb , and a < b, then the left and right spots of di′ are smaller than the
left and right spots of dj′ .
3. Let i < j, i ∈ Ea and j ∈ Eb . If a < b, then dj′ does not prevent di′ being played. If
a > b, then dj′ does prevent di′ being played.

Lemma 3.4. If D is a domino shave position and D′ is its normalized version, then
D = D′ .

Proof. Let {di : i = 1, 2, . . . , k} be the dominoes in D, and let {di′ : i = 1, 2, . . . , k} be the dominoes in D′ . We show that D − D′ = 0.
The strategy will be the usual mimic strategy: if the first player plays the domino
with index i in one of the two strings, then the second player plays the other domino
of index i. To prove this, we need to show that at every stage of the game, di is playable
if and only if di′ is playable.
On the first move, the only dominoes playable in D are those di , i ∈ E1 . By
Lemma 3.3(3) the dominoes di′ , i ∈ Ea , a > 1, are not playable.
Now consider the dominoes di′ , dj′ , i, j ∈ E1 , i < j. If there is a green domino dc′ ,
c ∈ E1 , i < c ⩽ j, then both spots of dj′ are greater than those of di′ . If there is no such
domino, then the spots of di′ and dj′ are p and p + 1 for some p. The order depends on
the domino color. Consequently, dj′ does not prevent the playing of di′ . Therefore, for
i ∈ E1 , both di and di′ are playable.
Now suppose di , i ∈ Ea , a > 1, is playable. This is only possible if dj , j > i, j ∈ Eb ,
b < a, have been played or eliminated. By the mimic strategy played so far, it is also
true that dj′ , j > i, j ∈ Eb , b < a, have been played or eliminated. By Lemma 3.3(3) the
dominoes di′ , i ∈ Eb , b > a, are not playable. By Lemma 3.3(2) the dominoes dj′ , i < j,
i, j ∈ Eb , do not prevent di′ from being played. Therefore di′ is playable.
Suppose di′ , i ∈ Ea , is playable. Again, the dominoes dj′ , j > i, j ∈ Eb , b < a, have
been played or eliminated. By the mimic strategy played so far, it is also true that dj ,
j > i, j ∈ Eb , b < a, have been played or eliminated. However, by the normalization algorithm, once the dominoes dj with j ∈ E1 ∪ ⋅ ⋅ ⋅ ∪ Ea−1 are gone, every domino with index in Ea , and in particular di , is playable.
This shows that the mimic strategy is possible, and therefore D − D′ is a second
player win.

3.2 domino shave is clockwise hackenbush


The proof of the equivalence between domino shave and clockwise hackenbush is
similar to that of domino shave and normalized domino shave.

Theorem 3.5. There is a bijection f between domino shave and clockwise hacken-
bush positions such that D − f (D) = 0.

Proof. Let D = (d1 , d2 , . . . , dk ) be a normalized domino shave position. Let Ea be the index set of the last line of dominoes replaced in the normalization algorithm. The proof follows by induction on a.
Suppose a = 1. Let T = (e1 , e2 , . . . , ek ) be a string where ei is the same color as di .
Every domino in D is playable and remains playable until it is eliminated. Similarly, T
is a trunk, so every edge is playable and remains playable until it is removed.
Suppose a > 1. Consider D′ = D \ {di : i ∈ Ea }. Now D′ is a normalized domino
shave position and by induction there exists a unique clockwise hackenbush T ′
with f (D′ ) = T ′ . Also, D′′ = (di : i ∈ Ea ) is equivalent to a string T ′′ . Let j be the greatest
index in Ea , and let ej+1 be the edge of T ′ that corresponds to dj+1 . Create a new tree T
by identifying the bottom vertex of T ′′ and the bottom vertex of ej+1 . Place T ′′ to the left
of the edge ej+1 . Set f (D) = T. Note that every edge of T is associated with a domino,
specifically, di ↔ ei .

Claim. D − T = 0.

Proof of Claim. This follows in a similar fashion to the previous equivalence result. The
mimic strategy is to play the corresponding other object of the same index.
If a = 1, then all edges and dominoes are playable and remain playable until elim-
inated.
If i ∉ Ea , then neither the dominoes in D′′ nor the edges of T ′′ prevent di and ei from being played.
Suppose i ∈ Ea . If di is playable, then dj+1 has been eliminated. Therefore in T, ej+1 has also been eliminated. The string T ′′ is now part of the trunk, and every edge, including ei , is playable. If ei is playable, then it is on the trunk, and ej+1 has been eliminated. Therefore dj+1 has been eliminated, and every domino in D′′ , including di , is playable.
This proves the claim and the equivalence.
From a clockwise hackenbush position it is possible to get the normalized
domino shave position by realizing that the first trunk corresponds to the domi-
noes in E1 and the next strings to the left, in order, correspond to the dominoes of
E2 , E3 , . . . , En . The normalization algorithm then gives a set of dominoes.

3.3 Relationship with other games


Versions of domino shave include, as particular cases, several other rulesets, each of
which has been shown to have interesting or intriguing properties.
1. If all the dominoes are (1, 1), then the clockwise hackenbush version is a sin-
gle string of green edges. This is nim, which is the foundation of all impartial
games [5].
2. If all the pieces are of the form (a, a), then this is stirling shave [8]. An ex-
plicit formula for evaluating the ordinal sums of nimbers is developed to give
the values of the positions. If the dominoes are a permutation of the dominoes
(1, 1), (2, 2), . . . , (n, n), then the number of 𝒫 -positions of length n is given in terms
of the Stirling numbers of the second kind.
3. In Hetyei’s Bernoulli game [9], the domino di is restricted to having both spots
between 1 and i. Only the right spot is used to determine when a domino can be
removed, and thus it is an impartial game. The number of 𝒫 -positions with n domi-
noes is given in terms of the Bernoulli numbers of the second kind. The game can
be shown to be equivalent to stirling shave via the following. A domino is un-
playable if it can never be the first of the string to be removed. A blue domino is un-
playable since the right spot is greater than the left. If the dominoes di+1 , di+2 , . . . , dj are unplayable and di is prevented from being played by df , i + 1 ⩽ f ⩽ j, then di is unplayable. Removing all unplayable dominoes does not affect the options of any follower in the game. What remains is a subset of the red and green dominoes, all of
which have their right spots no larger than their left spots. If di is prevented from
being played by dj , then, in particular, ri > rj . Thus when each domino (l, r) is re-
placed by (r, r), the same dominoes can be played, and the dominoes that prevent
a domino from being played are the same in both games.
4. If all the pieces are (1, 2) and (2, 1), then it is equivalent to blue-red hackenbush
strings.
5. If all the pieces are (1, 2), (2, 1), and (1, 1), then this is blue-red-green hacken-
bush strings. The value can be given via the ordinal sums of numbers and nim-
bers. It is a well-known open problem to give an explicit formula. It seems clear
that the values of the strings are unique, but we do not know of a proof.

Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, A. K. Peters, 2007.
[2] E. R. Berlekamp, The Hackenbush number system for compression of numerical data, Inform.
and Control 26 (1974), 134–140.
[3] E. R. Berlekamp, Yellow-Brown Hackenbush, in Games of No Chance 3, pp. 413–418, Cambridge
Univ. Press, 2009.
[4] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Academic Press, London, 1982.
[5] C. L. Bouton, Nim, a game with a complete mathematical theory, Annals of Mathematics 3
(1902), 35–39.
[6] J. H. Conway, On Numbers and Games, Academic Press, 1976.
[7] A. Fink, R. J. Nowakowski, A. N. Siegel, and D. Wolfe, Toppling Conjectures, in Games of No
Chance 4, pp. 65–76, Cambridge University Press, 2015.
[8] M. Fisher, R. J. Nowakowski, and C. Santos, Sterling Stirling play, Internat. J. Game Theory 47(2) (2018), 557–576.
[9] G. Hetyei, Enumeration by kernel positions, Adv. in Appl. Math. 42 (2009), 445–470.
[10] G. Hetyei, Enumeration by kernel positions for strongly Bernoulli type truncation games on
words, J. Combin. Theory Ser. A 117 (2010), 1107–1126.
[11] N. A. McKay, Forms and Values of Number-like and Nimber-like Games, PhD Thesis, Dalhousie
University, 2016.
[12] N. A. McKay, R. Milley, and R. J. Nowakowski, Misère-play Hackenbush Sprigs, Internat. J. Game
Theory 45 (2016), 731–742.
[13] A. N. Siegel, Combinatorial Game Theory, American Math. Soc., Providence, 2013.
[14] T. van Roode, Partizan Forms of Hackenbush Combinatorial Games, M. Sc. Thesis, University of
Calgary, 2002.
Alexander Clow and Stephen Finbow
Advances in finding ideal play on poset games
Abstract: Poset games are a class of combinatorial games that remain unsolved. Soltys
and Wilson proved that computing winning strategies is in PSPACE; aside from particular cases such as Nim and N-Free games, polynomial-time algorithms for finding ideal play are unknown. In this paper, we present methods to calculate the nimber of poset games, allowing for the classification of winning or losing positions. The results
present an equivalence of ideal strategies on posets that are seemingly unrelated.

1 Introduction
Poset games are impartial combinatorial games whose game boards are partially or-
dered sets (posets) P on which players take turns removing an element p ∈ P and every
element p′ ≥P p from P. We define P − p≤ = P \ {p′ : p ≤ p′ }. Each turn a player must
remove an element if they can. An example of some moves in a poset game is given in
Figure 6.1.
In this paper, we consider normal play games, where the first player having no
move loses. Poset games include Nim, Chomp, Subset Take-Away, Divisors, and Green Hackenbush on trees.

Figure 6.1: Examples of moves in poset games.

Acknowledgement: Research of S. Finbow was funded by Natural Sciences and Engineering Research Council of Canada grant number 2014-06571.
The authors would like to thank Dr. Darien DeWolfe and Dr. Richard Nowakowski for sharing their in-
sights and thoughts on various aspects of the paper.

Alexander Clow, Stephen Finbow, Department of Mathematics and Statistics, St. Francis Xavier
University, Antigonish, Nova Scotia, Canada, e-mails: x2018ytd@stfx.ca, sfinbow@stfx.ca

https://doi.org/10.1515/9783110755411-006

As an example, Figure 6.2 shows a game of chomp and the equivalent poset game.

Figure 6.2: A chomp position described as a poset game.

Nim, central to the theory of all impartial games, is a poset game played on any number
of disjoint totally ordered sets of finite or transfinite cardinality called piles (see Fig-
ure 6.3) and as a result serves as one of the simplest poset games. It is well known that
a game of Nim is in 𝒫 (the set of previous player win games, also called 𝒫 -positions)
if and only if the binary XOR sum of the pile heights is 0 [3].


Figure 6.3: Examples of moves in Nim.
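The 𝒫-position test is immediate to phrase as code; the following one-liner is our own illustration (Python), not part of the paper:

    from functools import reduce
    from operator import xor

    def nim_is_previous_player_win(piles):
        """A Nim position is a P-position iff the XOR of the pile heights is 0."""
        return reduce(xor, piles, 0) == 0

    assert nim_is_previous_player_win([1, 2, 3])        # 1 ^ 2 ^ 3 == 0
    assert not nim_is_previous_player_win([1, 2, 4])    # 1 ^ 2 ^ 4 == 7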

Aside from Nim, very little is known about ideal play on general poset games. Soltys
and Wilson [7] proved that computing a winning strategy is in PSPACE, and aside from particular cases such as Nim and N-Free games [4], which are not the focus of this paper, polynomial-time algorithms for finding ideal play are unknown. Byrnes [2] also proved nonconstructively that local periodicity exists in Chomp and poset games that resemble Chomp. Attempts at constructive results have thus far been largely unsuccessful [8]. Computational efforts, like those of Zeilberger [9], demonstrate that this local periodicity leads to no discernible global pattern even in cases as small as 3 by n Chomp.
Partizan games are games where players may have different sets of moves. We
can describe a game G as the set of its left and right options (or moves). This is often
denoted G = {Gℒ | Gℛ }, where Gℒ and Gℛ are the sets of left and right options, re-
spectively. Impartial games are exactly those games where the left and right options
are the same for G and all followers of G. Two impartial games G and H are equal,
G = H, if G + H is a 𝒫 -position. To describe this another way, playing on the games G
and H where each turn a player chooses to move on G or on H is a 𝒫 -position. Sprague
and Grundy independently proved the following result, which is vital to the study of
impartial games in normal play.

The Sprague–Grundy Theorem ([1]). For all impartial combinatorial games G, there ex-
ists a Nim pile that is equivalent to G.

The game with a Nim pile of size n is denoted as ∗n, and G = ∗n denotes that
the game G is equivalent to a Nim pile of size n. The 𝒢 -value (Grundy value or Grundy
number) of an impartial game G is exactly the n such that G = ∗n. Equivalently, we
write 𝒢 (G) = n. We define the option value set of an impartial game G, which we denote
G∗ , to be the set of 𝒢 -values of all the options of G. An important tool in determining
𝒢 -values is the mex-rule (minimal excluded value rule). Formally, the mex of a set of
nonnegative integers is the smallest nonnegative integer not in the set. For example,
mex{0, 2} = 1. From Sprague and Grundy’s theory [1], 𝒢 (G) = mex(G∗ ) for all impartial
games G. We will write G ≡ H if G∗ = H ∗ .
For impartial games, the canonical form of a value ∗n is exactly the Nim pile of
size n (or any other game with the same game tree). We will call G weakly canonical
if G ≡ ∗n, which is equivalent to |G∗ | = 𝒢 (G). An example of a game that is weakly
canonical is G = {{∗ | ∗}, ∗2 + ∗3, ∗ ‖ {∗ | ∗}, ∗2 + ∗3, ∗} ≡ ∗2. An example of a game that
is not weakly canonical is H = {0, ∗2 | 0, ∗2}.
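Because poset games are impartial, the mex-rule yields a direct, if exponential, way to compute 𝒢-values of small poset games. The sketch below is ours rather than the authors'; it assumes a finite poset given as a collection of elements together with a ≤ predicate:

    from functools import lru_cache

    def mex(values):
        """Smallest nonnegative integer not in `values`; mex({0, 2}) == 1."""
        n = 0
        while n in values:
            n += 1
        return n

    def grundy(elements, leq):
        """Grundy value of the poset game on `elements`: a move picks p and
        removes the up-set {q : leq(p, q)}.  Brute force with memoization."""
        @lru_cache(maxsize=None)
        def g(position):
            return mex({g(frozenset(q for q in position if not leq(p, q)))
                        for p in position})
        return g(frozenset(elements))

    # A chain of n elements is a single Nim pile of size n:
    assert grundy(range(3), lambda a, b: a <= b) == 3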
A fence F is a poset F = {f0 , f1 , f2 , . . . , fn } such that f0 > f1 , f1 < f2 , f2 > f3 , f3 <
f4 , . . . , fn−1 < fn or f0 < f1 , f1 > f2 , f2 < f3 , f3 > f4 , . . . , fn−1 > fn if n is even and f0 > f1 ,
f1 < f2 , f2 > f3 , f3 < f4 , . . . , fn−1 > fn or f0 < f1 , f1 > f2 , f2 < f3 , f3 > f4 , . . . , fn−1 < fn if n is
odd and such that these are all comparabilities between the points. The points f0 and
fn are the endpoints of the fence. A poset P is connected if and only if for all p1 , p2 ∈ P,
there is a fence F ⊂ P with endpoints p1 , p2 . A poset that is not connected is called
disconnected [6].
In this paper, we provide insights into how to play on connected poset games. We
do so by factoring/partitioning a connected poset A into subposets that give mean-
ingful information about 𝒢 (A) and the subgames of A. The work is related to playing
games on the ordinal sum of posets (which is exactly the ordinal sum of poset games)
but generalizes this idea. Let A and B be posets with disjoint underlying sets. Then the
ordinal sum A : B is the poset on A ∪ B with x ≤ y if either x, y ∈ A and x ≤A y, or
x, y ∈ B and x ≤B y, or x ∈ A and y ∈ B. In other words, any move in A eliminates all
possible moves in B for the remainder for the game. This concept has been generalized
to impartial games. The following result is due to Fisher, Nowakowski and Santos [5],
where mex(S, 0) = mex(S), and for k ≥ 1, mex(S, k) = mex(S′ ) when

S′ = S ∪ {mex(S, 0), mex(S, 1), . . . , mex(S, k − 1)}.

Theorem 1 ([5]). Let G and H be impartial games. Then 𝒢 (G : H) = mex(G∗ , 𝒢 (H)).
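Unwinding the definition above, mex(S, k) is just the (k + 1)-st smallest nonnegative integer not in S, so Theorem 1 is cheap to evaluate. A small sketch of ours (the helper name mex_k is hypothetical):

    def mex_k(S, k):
        """mex(S, k) from above: the (k+1)-st smallest nonnegative integer
        missing from S, so mex_k(S, 0) == mex(S)."""
        skipped, n = 0, 0
        while True:
            if n not in S:
                if skipped == k:
                    return n
                skipped += 1
            n += 1

    # Theorem 1 reads G(G : H) = mex(G*, G(H)); for instance, with G* = {0, 2}:
    assert mex_k({0, 2}, 0) == 1   # mex{0, 2} = 1, as in the text
    assert mex_k({0, 2}, 1) == 3   # mex({0, 2} ∪ {1}) = 3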



In Section 2, we define a class of mappings between posets that preserve the underlying order of the poset in a useful way for studying poset games. Such a map is
used to partition (or factor) the poset and establish a relationship between this par-
tition and the nimbers in Section 3. Section 4 provides examples of applications of
the results, and Section 5 provides some concluding remarks and directions for future
research.

2 Order compressing map


We say that a function f : P → Q such that P and Q are partially ordered sets is order
compressing if for all x, y ∈ P, f (x) = f (y) = q ∈ Q if and only if for every z ∈ P:
– if z <P x and z <P y, then f (z) ≤Q q;
– if z ≮P x and z ≮P y, then f (z) ≮Q q; and
– if either z <P x and z ≮P y, or z ≮P x and z <P y, then f (z) = q.

Clearly, each order compressing function is a homomorphism (see Figure 6.4); moreover, every order compressing function is also order reflecting. The converse of the first statement is false: Figure 6.5 gives an example of a homomorphism that is not order compressing.


Figure 6.4: An order compressing homomorphism.


Figure 6.5: A map which is a homomorphism but not order compressing.
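On finite posets the definition can be checked mechanically. The sketch below is our own transcription, and it encodes only the forward direction of the "if and only if" (whenever f(x) = f(y), the three displayed conditions must hold for every z); posets are given by their ≤ predicates, and f is a dictionary:

    def is_order_compressing(P, leq_P, Q, leq_Q, f):
        """Check the order-compressing conditions for f : P -> Q (forward
        direction of the definition, on finite posets)."""
        def lt(leq, a, b):  # strict comparison a < b
            return a != b and leq(a, b)

        for x in P:
            for y in P:
                if f[x] != f[y]:
                    continue
                q = f[x]
                for z in P:
                    zx, zy = lt(leq_P, z, x), lt(leq_P, z, y)
                    if zx and zy and not leq_Q(f[z], q):
                        return False
                    if not zx and not zy and lt(leq_Q, f[z], q):
                        return False
                    if zx != zy and f[z] != q:
                        return False
        return True

    # The identity map on a poset is order compressing (used later in Theorem 5):
    P = [0, 1, 2]
    leq = lambda a, b: a <= b
    assert is_order_compressing(P, leq, P, leq, {p: p for p in P})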



When f : P → Q is order compressing, an f -factor of P is a subposet of P defined by f −1 (x) for some x ∈ Q. The set of f -factors is a Q-factorization of P. When the choice of f is clear from the context, we call an f -factor of P a factor of P and a Q-factorization of P a factorization of P.

3 Equivalencies of games
In this section, we establish two results allowing for the reduction of poset games to
simpler poset games using order compressions. The first (Theorem 3) is a generaliza-
tion of the colon principle originally given in [3]. The second (Theorem 4) is a result
that deals with the interchangeability of particular classes of subposets that are option
equivalent.

Theorem 2. Let A and B be posets, and let f : A → Q and g : B → Q be order compressing maps. If x is maximal in A, f (x) = α, and f −1 (β) ≅ g −1 (β) for all β ∈ Q with β ≠ α, then f −1 (α) = g −1 (α) implies A = B.

Proof. Assume that f −1 (α) = g −1 (α) and consider A + B. It is sufficient to show that o(A + B) = 𝒫 . Without loss of generality, for all moves on A not on f −1 (α), the second player will mirror the move in B. If a player moves on f −1 (α) or g −1 (α), then respond as if you were playing f −1 (α) + g −1 (α). By the maximality of x, any such move will not affect any y ∈ f −1 (β) or z ∈ g −1 (β). By induction, all of the above countermoves are winning moves. Thus o(A + B) = 𝒫 .

Theorem 3. Let A and B be posets, and let f : A → Q and g : B → Q be order compressing maps. If x is maximal in A, f (x) = α, and f −1 (β) ≅ g −1 (β) for all β ∈ Q with β ≠ α, then f −1 (α) = g −1 (α) if and only if A = B.

Proof. By Theorem 2 we only need to show that if A = B, then f −1 (α) = g −1 (α). Assume
that A = B and consider f −1 (α) + g −1 (α). It suffices to show that o(f −1 (α) + g −1 (α)) = 𝒫 .
Assume for contradiction that o(f −1 (α) + g −1 (α)) = 𝒩 . Then there exists an element z
on f −1 (α) or g −1 (α) such that, without loss of generality, o((f −1 (α) − z≤ ) + g −1 (α)) = 𝒫 .
By Theorem 2 this implies that playing z on A + B is a winning move, which contra-
dicts our assumption that A = B. Hence no such z exists, and, as a result, o(f −1 (α) +
g −1 (α)) = 𝒫 .

An example of an application of Theorem 3 is given in Figure 6.6, which depicts two Green Hackenbush positions (drawn as poset games). Theorem 3 implies that both of these positions have the same 𝒢 -value.
Figure 6.7 gives an example of a poset where the assumptions of Theorem 3 are satisfied except for the maximality of x in A. In this example, A = ∗2 and B = ∗, and hence the assumption that x is maximal may not be relaxed in Theorem 3. Figure 6.8 gives an example of a nontrivial poset. The following corollary implies that this poset is a previous player win position.

Figure 6.6: An example of Theorem 3 applied to a game of Green Hackenbush.

Figure 6.7: An example of why Theorem 3 requires x to be maximal.

Figure 6.8: A 𝒫-position given by Theorem 3.

Corollary 1. Let f : A → Q be an order compressing map. If 𝒢 (f −1 (β)) = 0 for all β ∈ Q, then A = 0.

Proof. Let A1 = A \ f −1 (α), where α is maximal in Q. By definition 0 = 0, so A1 = A by Theorem 3. We may now repeat this process on any maximal element of A1 . It follows by induction that A = 0.

Theorem 4. Let A and B be posets, and let f : A → Q and g : B → Q be order compressing maps such that f −1 (β) ≡ g −1 (β) for all β ∈ Q. Then A ≡ B.

Proof. Let α1 , α2 , . . . , αd be the elements of Q such that f −1 (αi ) ≇ g −1 (αi ). Let Ai be the poset such that hi : Ai → Q is an order compression and, for all j ⩽ i, hi −1 (αj ) ≅ g −1 (αj ), whereas for all k > i, hi −1 (αk ) ≅ f −1 (αk ). Then A ≅ A0 and B ≅ Ad .
Consider Ai and Ai+1 . If |Q| = 1, then the statement is trivial. Otherwise, without loss of generality, for every move on hi −1 (αi+1 ) in Ai , there exists a move on hi+1 −1 (αi+1 ) in Ai+1 such that the two resulting games are equal, by our assumption that f −1 (β) ≡ g −1 (β) and Theorem 3. If either player moves anywhere else on Ai or Ai+1 , then the resulting games are equal by induction as Ai ≡ Ai+1 implies Ai = Ai+1 . Thus Ai ≡ Ai+1 . Hence, by the transitivity of ≡, A0 ≡ Ad .

4 Applications
In this section, we provide two examples of applications of the results of the previous
section. Consider the posets drawn in Figure 6.9. We claim that these all have the same
nimber.


Figure 6.9: Three posets with nimber 3, given by a particular case of Theorem 3 (Colon Principle) and
Theorem 4.

To establish the first equivalence shown in Figure 6.9, consider the subposets consist-
ing of red elements. The option value set of these subposets is {0, 2} and hence the
equivalence follows from Theorem 4. The second equivalence in Figure 6.9 follows
from applying Theorem 3 as we claim the subposets consisting of blue elements have
nimber 1. To demonstrate this, the subposet of blue elements is redrawn and recolored
in Figure 6.10. Observe that the subset of green elements in Figure 6.10 has nimber 0
and contains a maximal element of the poset. Theorem 3 now implies the first equiv-
alence in Figure 6.10. The second equivalence in Figure 6.10 can be found by applying
Theorem 1.

Figure 6.10: Equality of the subposet of blue elements from Figure 6.9 by Theorem 1 and Theorem 3.

Theorem 4 points to the importance of determining the set of posets with a given option value set. A natural avenue of investigation is, given a set of nonnegative integers S, to enumerate the games that have option value set S.

Theorem 5. For all S ⊂ ℕ0 such that S ≠ ∅, {P : P ∗ = S} ≠ ∅ if and only if |{P : P ∗ = S}| is infinite.

Proof. If {P : P ∗ = S} = ∅, then trivially |{P : P ∗ = S}| = 0. Let Q ∈ {P : P ∗ = S}. Note that the identity map on Q is an order compressing function. For a given positive integer n, create Qn by replacing a fixed but arbitrary x ∈ Q with 2n + 1 copies of itself, so that the copies form an antichain, and when Qn is order compressed to Q, each element is sent to x. It follows from Theorem 4 that Q ≡ Qn . As n can be any integer, this concludes the proof.

5 Conclusion
In this paper, we establish equivalencies in ideal play between poset games that have
obvious and not so obvious similarities. In particular, this work contributes to the study of ideal play on connected poset games. Whether the converse of Theorem 4 is true
remains an open question. We end the paper by asking a related question.
Let the lexicographic product of two posets A and B, denoted here A ⊗ B, be the poset given by the Cartesian product A × B ordered by the following rule: (a, b) ≤ (a′ , b′ ) if and only if a ≤A a′ and, if a = a′ , then b ≤B b′ .

We note that there is a natural order compressing function f : A ⊗ B → A, where each f -factor of A ⊗ B is isomorphic to B.
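For concreteness, the product order and the compressing map are short to write down; a sketch of ours, with posets given as ≤ predicates:

    def lex_leq(leq_A, leq_B):
        """Order of A ⊗ B: (a, b) <= (a2, b2) iff a <=_A a2 and,
        when a == a2, also b <=_B b2."""
        return lambda p, q: leq_A(p[0], q[0]) and (p[0] != q[0] or leq_B(p[1], q[1]))

    # The natural order compressing map A ⊗ B -> A is projection onto the first
    # coordinate; each fiber (f-factor) {a} × B is a copy of B.
    def project(pair):
        return pair[0]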

Conjecture 1. Let A, B be poset games such that 𝒢 (B) = 2ⁿ , where n ∈ ℕ0 , and B is weakly canonical. Then for all (a, b) ∈ A ⊗ B,

𝒢 (A ⊗ B − (a, b)≤ ) = 𝒢 (B)𝒢 (A − a≤ ) + 𝒢 (B − b≤ ),

where multiplication and addition are standard for integers.



If true, then this would imply that for the lexicographic product A ⊗ B of any two
posets that satisfy the assumptions of Conjecture 1, 𝒢 (A ⊗ B) = 𝒢 (A)𝒢 (B). Moreover,
this implies that if the left factor A has nimber 0, then 𝒢 (A ⊗ B) = 0. Note that if B = 0
and B is weakly canonical, then A ⊗ B = 0. Verifying Conjecture 1 would point to the
possibility of the existence of other equations like that of the ordinal sum in Theorem 1.

Bibliography
[1] M. H. Albert, R. J. Nowakowski, D. Wolfe, Lessons in Play: An Introduction to Combinatorial Game
Theory, CRC Press, Boca Raton, 2019.
[2] S. Byrnes, Poset-game periodicity, Integers 3 (2003), Article G3.
[3] J. H. Conway, R. K. Guy, E. R. Berlekamp, Winning Ways for Your Mathematical Plays, Volume 1,
AK Peters, Natick, 1983.
[4] S. A. Fenner, J. Rogers, Combinatorial game complexity: an introduction with poset games,
arXiv:1505.07416 (2015).
[5] M. Fisher, R. J. Nowakowski, C. Santos, Sterling Stirling play, Internat. J. Game Theory 47(2)
(2018), 557–576.
[6] B. S. W. Schröder, Ordered Sets, Birkhäuser, Boston, 2003.
[7] M. Soltys, C. Wilson, On the complexity of computing winning strategies for finite poset games,
Theory Comput. Syst. 48(3) (2011), 680–692.
[8] D. Zeilberger, Chomp, recurrences and chaos, J. Difference Equ. Appl. 10(13–15) (2004),
1281–1293.
[9] D. Zeilberger, Three-rowed chomp, Adv. in Appl. Math. 26(2) (2001), 168–179.
Erik D. Demaine and Yevhenii Diomidov
Strings-and-Coins and Nimstring are
PSPACE-complete
In memoriam Elwyn Berlekamp (1940–2019),
John H. Conway (1937–2020),
and Richard K. Guy (1916–2020)

Abstract: We prove that Strings-and-Coins, the combinatorial two-player game generalizing the dual of Dots-and-Boxes, is strongly PSPACE-complete on multigraphs. This
result improves the best previous result, NP-hardness, argued in Winning Ways. Our
result also applies to the Nimstring variant, where the winner is determined by normal
play; indeed, one step in our reduction is the standard reduction (also from Winning
Ways) from Nimstring to Strings-and-Coins.

1 Introduction
Elwyn Berlekamp loved Dots and Boxes. He wrote an entire book The Dots and Boxes
Game: Sophisticated Child’s Play [3] devoted to explaining the mathematical under-
pinnings of the game, after they were first revealed in Berlekamp, Conway, and Guy’s
classic book Winning Ways exploring many such combinatorial games [2, Ch. 16]. At
book signings for both books1 and after talks he gave about these topics [1], Elwyn
routinely played simultaneous exhibitions of Dots and Boxes—him against dozens of
players, in the style of chess masters.
As many children will tell you, Dots-and-Boxes is a simple pencil-and-paper game
taking place on an m × n grid of dots. Two players alternate drawing edges of the grid
with one special rule: when a player completes the fourth edge of one or two 1×1 boxes,
that player gains one or two points, respectively, and must immediately draw another
edge (a “free move”, which is often a blessing and a curse). The game ends when all
grid edges have been drawn; then the player with the most points wins. (Draws are
possible on boards with an even number of squares.)

1 The first author had the honor of playing such a game against Elwyn at a book signing on April 13,
2004, at Quantum Books in Cambridge, Massachusetts. Elwyn won.

Acknowledgement: This work was initiated during an MIT class on Algorithmic Lower Bounds: Fun
with Hardness Proofs (6.892, Spring 2019). We thank the other participants of the class for providing
an inspiring research environment.

Erik D. Demaine, Yevhenii Diomidov, MIT Computer Science and Artificial Intelligence Laboratory,
Cambridge, Massachusetts, USA, e-mails: edemaine@mit.edu, diomidov@mit.edu

https://doi.org/10.1515/9783110755411-007

An equivalent way to think about Dots-and-Boxes is in the dual of the grid graph.
Think of each 1 × 1 square as a dual vertex or coin worth one point, “tied down” by four
incident strings or dual edges. Interior strings connect two coins, whereas boundary
strings connect a coin to the ground (not worth any points). (Equivalently, boundary
edges have only one endpoint.) Now players alternate cutting (removing) strings, and
when a player frees one or two coins (removing the last strings attached to them), that
player gains the corresponding number of points and must move again. The game
ends when all strings have been cut; then the player with the most points wins.
Strings-and-Coins [2, pp. 550–551], [3, Ch. 2] is a generalization of this game to ar-
bitrary graphs, where vertices represent coins, and edges represent strings, which can
connect up to two coins (the other endpoints being considered “ground”). Nimstring
[2, pp. 552–554], [3, Ch. 6] is the closely related game, where we modify the win condi-
tion to normal play: the first player unable to move loses. Nimstring is known to be a
particular case of Strings-and-Coins, a fact we use in our results; see Lemma 1.

Related work
Dots-and-Boxes, Strings-and-Coins, and Nimstring are surprisingly intricate games
with intricate strategy [2, 3]. On the mathematical side, even 1 × n Dots-and-Boxes
is largely unsolved [8, 5].
To formalize this difficulty, Winning Ways [2] argued in 1984 that deciding the win-
ner of a Strings-and-Coins position is NP-hard by a reduction from vertex-disjoint cycle
packing. Around 2000, Eppstein [7] pointed out that this reduction can be adapted to
apply to Dots-and-Boxes as well; see [6].
This work left some natural open problems, first explicitly posed in 2001 [6]: are
Dots-and-Boxes, Strings-and-Coins, and Nimstring NP-complete or do they extend
into a harder complexity class? Being bounded two-player games, all three naturally
lie within PSPACE; are they PSPACE-complete?

Results
In this paper, we settle two out of three of these 20-year-old open problems by proving
that Strings-and-Coins and Nimstring are PSPACE-complete. This is the first improve-
ment beyond NP-hardness since the original Winning Ways result from 1984. Our re-
ductions from Game SAT are relatively simple but subtle. Along the way, we prove
the PSPACE-completeness of a new Strings-and-Coins variant called Coins-Are-Lava,
where the first person to free a coin loses.
Our constructed game positions rely on multigraphs with multiple copies of some
edges/strings, a feature not present in instances corresponding to Dots-and-Boxes.
Thus our results do not apply to Dots-and-Boxes. A generalization of Dots-and-Boxes
that we might be able to target is weighted Dots-and-Boxes, where each grid edge has a
specified number of times it must be drawn before it is “complete” and thus can form
the boundary of a 1 × 1 box. This game corresponds to Strings-and-Coins on planar
multigraphs whose vertices can be embedded at grid vertices such that edges have
unit length. However, our multigraphs are neither planar nor maximum-degree-4, so
they cannot be drawn on a square grid, and so our approach does not resolve the com-
plexity of weighted Dots-and-Boxes.
In independent work, Buchin, Hagedoorn, Kostitsyna, and van Mulken [4]
proved that (unweighted) Dots-and-Boxes is PSPACE-complete by a reduction from
Gpos (POS CNF) [9] (roughly the same problem that we reduce from, Gpos (POS DNF) [9]).
They construct an instance where, after variable setting, one player’s winning strategy
is to select a maximum set of disjoint cycles. This approach works well for Dots-and-
Boxes (and thus Strings-and-Coins), where the goal is to maximize score, but not for
Nimstring like our approach does. Thus the two approaches are incomparable.

2 Nimstring
We begin with more formal definitions of the games of interest and some known lem-
mas about them.

Definition 1 (Coin–String Multigraph). A multigraph G = (V, E) consists of vertices V, also called coins, and edges E, also called strings, where each edge e ∈ E is a set of at most two vertices in V. Notably, we allow edges incident to zero or one vertices in V; we view the missing endpoints of such an edge as being connected to the ground.

Definition 2 (Strings-And-Coins(G) and Nimstring(G)). Games Strings-And-Coins(G) and Nimstring(G) are played on a multigraph G by two players, who alternate removing edges, and if a player frees one or two coins by removing their last incident edges, then that player gains the corresponding number of points (one or two) and must move again. The games end when there are no more strings; in Strings-And-Coins(G) the player with the most points wins, whereas in Nimstring(G) the first player unable to move loses.
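These rules translate directly into a brute-force search, which is handy for experimenting with tiny positions (and only tiny ones: the state space is exponential). The sketch below is ours, not from the paper; an edge is a tuple of 0, 1, or 2 vertex labels, with missing endpoints standing for the ground:

    from functools import lru_cache

    def best_margin(edges):
        """Best score margin (own coins minus opponent's) that the player to
        move can guarantee in Strings-and-Coins on the given multigraph."""
        @lru_cache(maxsize=None)
        def solve(remaining):
            if not remaining:
                return 0
            best = None
            for i, e in enumerate(remaining):
                rest = remaining[:i] + remaining[i + 1:]
                # Coins freed: endpoints of e with no other incident string.
                freed = sum(1 for v in e if all(v not in f for f in rest))
                if freed:
                    value = freed + solve(rest)   # same player must move again
                else:
                    value = -solve(rest)          # turn passes to the opponent
                best = value if best is None else max(best, value)
            return best
        return solve(tuple(edges))

    # One coin tied to the ground by a single string: the first player cuts the
    # string, frees the coin, and wins 1-0.
    assert best_margin([(0,)]) == 1

A positive margin means a first-player win in Strings-And-Coins(G), and zero a draw.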

Next, we prove the standard result that Nimstring is equivalent to a particular case
of Strings-and-Coins, and thus hardness of the former implies hardness of the latter.

Lemma 1 ([2, p. 552]). For every graph G, there exists an efficiently computable graph
H such that the winner of Nimstring(G) is the same as the winner of Strings-And-
Coins(H).

Proof. Let H = G ∪ Cn , where Cn is a cycle on n > |V(G)| vertices. If a player cuts any string in this cycle, then the opponent can claim n > |V(H)|/2 coins in a single turn, winning the game. Therefore the players will try to only cut edges in G, and the player who cannot do so loses. This goal is equivalent to just playing Nimstring(G).
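The construction in Lemma 1 is mechanical; a sketch of ours (assuming coins are labeled 0, 1, . . . and edges are tuples of labels, as above):

    def nimstring_to_strings_and_coins(vertices, edges):
        """Lemma 1's construction: H = G ∪ C_n, a disjoint cycle on
        n > |V(G)| fresh coins."""
        n = len(vertices) + 1
        start = max(vertices) + 1
        cycle = [start + i for i in range(n)]
        cycle_edges = [(cycle[i], cycle[(i + 1) % n]) for i in range(n)]
        return vertices + cycle, edges + cycle_edges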

A final known result we will need is about “loony” positions in Nimstring.



Lemma 2 ([2, p. 557]). If G has a degree-2 vertex adjacent to exactly one degree-1 vertex,
then the first player can always win in Nimstring(G). Such positions are known as loony
positions.


(a) String b connects to a vertex of degree at least 2. (b) String b connects to the ground.

Figure 7.1: Two loony positions.

Proof. Let a be the string between the two coins, let b be the other string connected to
a degree-2 coin, and let G′ be the rest of the graph (Figure 7.1). One of the players has
a winning strategy in Nimstring(G′ ).
– If the first player has a winning strategy in Nimstring(G′ ), then we cut strings a
and b in this order. We get exactly G′ , and it is still our turn. By assumption we
can win.
– If the second player has a winning strategy in Nimstring(G′ ), then we just cut
string b. We get graph G′ (plus an extra edge that does not affect the game), and it is our opponent’s turn. By assumption the opponent cannot win.

3 Coins-are-Lava
We introduce a variant game played on strings and coins that we find easier to analyze,
called Coins-are-Lava.2

Definition 3 (Coins-Are-Lava(G)). Game Coins-Are-Lava(G) is played on a multigraph G by two players, who alternate removing edges, and if a player frees a coin, then that player loses. Equivalently, players are forbidden from removing an edge that would free a coin, and the winner is determined according to normal play.

2 For a “practical” motivation for this game, consider the 1933 Double Eagle U. S. coin: until 2002, possession of this coin could result in imprisonment [10].

Now we show that Coins-are-Lava is a particular case of Nimstring. Thus its hard-
ness will imply the hardness of both Nimstring and (by Lemma 1) Strings-and-Coins.

Lemma 3. For every graph G, there exists an efficiently computable graph H such that
the winner of Coins-Are-Lava(G) is the same as the winner of Nimstring(H).


Figure 7.2: (a) The graph H; (b) freeing a coin in G results in a loony position; (c–f) cutting a string
outside G results in a loony position.

Proof. Let H be a graph obtained from G by connecting every coin to the ground with
a long chain (length ≥ 5); see Figure 7.2a.
If a player cuts a string in one of these chains or cuts all strings in G attached to
the same coin, then this creates a loony position and ends their turn; see Figure 7.2. By
Lemma 2 their opponent can then win.
Therefore the players will try to avoid cutting strings outside G or freeing a coin
in G. The first player to fail to do so loses. This goal is equivalent to Coins-Are-Lava(G).
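Lemma 3's construction is equally direct; a sketch of ours, in the same representation as before and with chains of length 5:

    def coins_are_lava_to_nimstring(vertices, edges, chain_len=5):
        """Lemma 3's construction: attach a chain of `chain_len` strings from
        every coin of G to the ground, via chain_len - 1 fresh coins."""
        new_vertices, new_edges = list(vertices), list(edges)
        fresh = max(vertices) + 1
        for v in vertices:
            path = [v] + list(range(fresh, fresh + chain_len - 1))
            fresh += chain_len - 1
            new_vertices += path[1:]
            new_edges += [(path[i], path[i + 1]) for i in range(chain_len - 1)]
            new_edges.append((path[-1],))        # final string to the ground
        return new_vertices, new_edges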

4 PSPACE-hardness
It remains to prove that Coins-Are-Lava(G) is PSPACE-complete. Our reduction is from
the following known PSPACE-complete problem.

Definition 4 (Game-SAT(ℱ )). Given a positive DNF formula ℱ (an or of ands of vari-
ables without negation), Game-SAT(ℱ ) is the following game played by two players,
Trudy and Fallon. Initially, each variable is unset. In each turn the player may set a
variable to true or false, or the player may skip their turn (do nothing). The game
ends when all variables are set; then Trudy wins if formula ℱ is true, whereas Fallon
wins if formula ℱ is false.

We allow players to skip turns and to set variables to the “wrong” value (Trudy
to false or Fallon to true). The player with a winning strategy can always avoid such
moves, however, replacing them with dominating “good” moves that do not skip and
play the “right” value (Trudy to true or Fallon to false), as such moves never hurt the
winning player’s final goal.
Schaefer [9] proved that this game is PSPACE-complete under the name
Gpos (POS DNF).

Theorem 1. Coins-are-Lava is PSPACE-complete.

Proof. Let ℱ be a positive DNF formula with n variables, m clauses, and ki occurrences
of each variable xi . Without loss of generality, every clause contains at least two vari-
ables, and every variable appears in at least one clause. Fix a sufficiently large number N ≫ m²n².
First, we define several useful gadgets, which will be connected together via
shared coins (merging the output coin of one gadget with the input coin of another
gadget). Many of these gadgets are parameterized by an integer level. Intuitively, do-
ing anything to a level-(ℓ + 1) gadget requires an order of magnitude more time than
doing anything to a level-ℓ gadget. This way we can make sure that players interact
with gadgets in the right order. However, since each level-ℓ gadget uses N^Θ(ℓ) strings, we can only use a constant number of levels.


Figure 7.3: A width-5 rope.

A rope (Figure 7.3) is a collection of strings that share both endpoints. The number of strings in a rope is called its width. We say that a rope has been cut when all of its strings have been cut. When the game ends, every rope has either been cut completely, or it has only one string remaining. (Otherwise, a string in the rope can always be safely cut without freeing any coin.)

Figure 7.4: Variable gadget. (a) Initial state (unset). (b) Variable set to true and false, respectively.

Figure 7.5: Wire gadget. (a) Initial state. (b) Disabled and activated wires.
A variable gadget (Figure 7.4) consists of a chain of two strings, where the bottom
string is connected to the ground, and the top string is connected to an output coin.
We say that it is set to false if the bottom string has been cut, set to true if the top string
has been cut, and unset if neither string has been cut. A variable implicitly has level 0.
A level-ℓ wire gadget (Figure 7.5) consists of a chain of two ropes, a width-N^{2ℓ−1} bottom rope connected to an input coin and a width-N^{2ℓ} top rope connected to an output coin. We say that it is disabled if the input rope has been cut and activated if the top rope has been cut. The HP (Hit Points) of the wire is the number of strings remaining in the bottom rope. Note that activating a wire takes a factor of N more moves than disabling it. This means that if one player is racing to activate a wire and the other is racing to disable it, then the disabler will win the race. Intuitively, the only case where a wire will get activated is if disabling the wire would free a coin.

A level-ℓ clause gadget (Figure 7.6) consists of a single width-N^{2ℓ−1} rope connected to an input coin and the ground. We say that it is disabled if the rope has been cut. The HP of the clause is the number of strings remaining in the rope.

Figure 7.6: Clause gadget.
The winner is determined solely by the parity of the number of removed strings.
We can easily flip this parity, for example, by adding an extra ground-to-ground string.
So without loss of generality, Fallon wins if (but not only if) every variable and wire
has one string remaining and all m clauses have no strings. Then Trudy wins if every
variable and wire has one string remaining, m−1 clauses have no strings, and the final
clause has one string. In fact, we will show that the game has to end in one of these
two specific ways.
Let ℱ ′ be a new formula with the following clauses:
– all clauses from ℱ , which we call real clauses;
– for every variable, a singleton clause containing just that variable; and
– one additional empty clause that contains no variables and is always satisfied.

We construct a multigraph G by connecting the gadgets as follows; refer to Figure 7.7:


– a variable gadget for each variable;
– a clause gadget for each clause;
– a level-1 wire from each variable xi to each of the ki real clauses that contain that variable;
– ki − 1 level-1 wires from each variable to the corresponding singleton clause;
– a single vertex called the root coin;
– a level-2 wire from the root coin to every real clause and every singleton clause;
and
– n + m − 1 level-2 wires from the root coin to the empty clause.

First, we describe how typical gameplay in G should look (without proofs) to give some
intuition for why this construction makes sense, and then we prove that it works more
formally. Typical gameplay divides into four sequential phases:
1. First, Trudy and Fallon set variable gadgets to true and false respectively.
2. Then the players disable all wires from false variables and disable all but one wire
from each true variable (disabling all wires from a true variable would free a coin).
Then they activate the level-1 wires that have not been disabled (one from each
true variable). Note that almost half the wires from each variable go to singleton
clauses. If all real clauses are false, then those wires form the majority, and Fallon
can ensure that one of them gets activated. However, if even one real clause is
true, then the wires to true clauses (real or singleton) now form a majority, and
Trudy can ensure that one of them gets activated.
3. Then the players disable all but one level-2 wire and activate the remaining level-2
wire (disabling all of them would free a coin).

Figure 7.7: Graph G for formula (x1 ∧ x2 ∧ x3 ) ∨ (x2 ∧ x3 ) ∨ (x3 ∧ x4 ). Clauses are labeled and colored according to whether they are empty (“empty” and gray, at the top), singleton (“xi ” and green), or real (“xi ∧ xj ⋅ ⋅ ⋅” and orange). Dotted lines indicate that there are supposed to be ki − 1 wires there, but ki − 1 = 0.

Almost half of these wires go to the empty clause. If all real clauses are false, then they form a majority, and Fallon
can ensure that one of them gets activated. However, if even one real clause is true,
then it together with the empty clause forms a majority, and Trudy can ensure that
one of them gets activated.
4. Finally, the players disable the clause gadgets. A clause can be disabled unless all
wires pointing at it got activated (in that case, disabling it would free a coin). If
the formula is not satisfied, then all clauses get disabled, and Fallon wins. If the
formula is satisfied, then exactly one clause remains, and Trudy wins.

We want to show that the winner of Coins-Are-Lava(G) is the same as the winner
of Game-SAT(ℱ ). We do a case split on the winner of Game-SAT(ℱ ) and in each case
provide a winning Coins-Are-Lava(G) strategy for that player.

If Fallon can win Game-SAT(ℱ ), then they can win Coins-Are-Lava(G) using the
following strategy (where numbers match the phases of intended gameplay above):
1. There is a natural mapping f from states of Coins-Are-Lava(G) to states of
Game-SAT(ℱ ): a variable xi in Game-SAT(ℱ ) is set to true if the corresponding
variable gadget is set to true, set to false if the gadget is set to false, and unset if
the gadget is unset. Every move in Coins-Are-Lava(G) maps to a valid move in
Game-SAT(ℱ ), where moves outside of variable gadgets map to skip moves. Also,
if we played Coins-Are-Lava(G) for less than 2n moves, then we can perform any
move that is valid in the corresponding Game-SAT(ℱ ) state. This does not free a
coin, because the relevant coin has degree Ω(N) ≫ 2n. So we can transfer the
strategy from Game-SAT(ℱ ) to Coins-Are-Lava(G): for every opponent’s move in
Coins-Are-Lava(G), map it to Game-SAT(ℱ ), find the best response, and map it
back to Coins-Are-Lava(G). We remain in this phase until we have set all variable
gadgets to some assignment that does not satisfy ℱ , as guaranteed by the winning
strategy in Game-SAT(ℱ ).
2. Call a level-1 wire from a true variable xi good if it points at a real clause and bad
if it points at a singleton clause. Wires from false variables are neutral. Each true
variable xi has ki good wires and ki − 1 bad ones. For each true variable xi , the total HP of bad wires is still at most (ki − 1)N, and the total HP of all good wires is at least ki N − O(n) > (ki − 1)N (the opponent could cut up to O(n) strings here while we were setting variables).
(a) Disable all bad wires. Specifically, if the opponent reduced HP of a good wire connected to some true variable xi , then we respond by reducing HP of a bad wire connected to the same xi ; if the opponent did something else or xi has no bad wires left, then we reduce HP of a bad wire connected to an arbitrary variable xj . This maintains the invariant that for each true variable xi , the HP of good wires of xi is higher than the HP of bad wires of xi . The opponent cannot activate any bad wires because that would take Θ(N²) ≫ ∑i ki N moves.
(b) Disable good and neutral level-1 wires until there is only one good wire re-
maining per true variable. Once again, the opponent cannot activate these
wires because that would take too many moves.
(c) Activate the remaining good wires. Opponent cannot disable these wires, be-
cause that would free a coin.
(d) We have activated exactly one good wire per true variable. There are no acti-
vated level-1 wires pointing at satisfied clauses, because real clauses are un-
satisfied and singleton clauses are bad.
3. Call a level-2 wire good if it points to a real or singleton clause and bad if it points to the empty clause. The total HP of the n + m − 1 bad wires is at most (n + m − 1)N³, and the total HP of the n + m good wires is still (after O(nmN²) moves spent in the first two stages) at least (n + m)N³ − O(nmN²) > (n + m − 1)N³.
(a) Disable all bad wires. The opponent cannot disable all good wires before we
disable the bad ones because good wires have more HP.
(b) Disable all but one good wire. Because all disabling and activating steps done so far are for wires of HP Θ(N³) and activating a level-2 wire requires Θ(N⁴) moves, the opponent cannot afford to activate any of these good wires before we disable them.
(c) Activate the last good wire. The opponent cannot disable it because that
would free the root coin.
4. Disable all clause gadgets. This will not free a coin, because every clause has
at least one disabled wire: real clauses are unsatisfied, so there is a false vari-
able whose adjacent wire we disabled in Step 2(b); singleton clauses have bad
level-1 wires that we disabled in Step 2(a); and the empty clause has a bad level-2
wire that we disabled in Step 3(a). We win because there are no clause gadgets
remaining.

If Trudy can win Game-SAT(ℱ ), then they can win Coins-Are-Lava(G) using the fol-
lowing strategy (where numbers match the phases of intended gameplay above):
1. Set the variable gadgets to some assignment that satisfies ℱ . Let C ∈ ℱ be a satis-
fied clause.
2. Call a level-1 wire from a true variable xi ∈ C good if it points at a singleton clause
or C and bad otherwise. Wires from variables not in C are neutral. Each variable
xi ∈ C has ki − 1 bad wires (ki to real clauses, but one of them is C) and ki good ones
(ki − 1 to the singleton clause and one to C). Disable all bad wires, then disable
all neutral wires and all-but-one good wire per variable in C, and then activate
the remaining good wires. Each activated wire points to a singleton clause or to C.
Then either all of them point to C, or at least one of them points to a singleton
clause. Either way, we have some satisfied real or singleton clause C ′ with only
activated level-1 wires.
3. Call a level-2 wire good if it points to C ′ or to the empty clause. There are n + m − 1 bad wires (n + m to real and singleton clauses, but one of them points to C ′ ) and n + m good ones (n + m − 1 to the empty clause plus one to C ′ ). Disable all bad wires and activate exactly one good wire. Let C ′′ be the clause pointed to by the activated wire (either C ′ or the empty clause).
4. Disable all clause gadgets other than C ′′ . This will not free a coin, because every
clause other than C ′′ has a disabled level-2 wire. However, C ′′ cannot be disabled,
because all of the wires pointing at it have been activated. We win because there
is exactly one clause gadget remaining.

Corollary 1. Nimstring is PSPACE-complete.

Proof. This follows immediately from Theorem 1 and Lemma 3.

Corollary 2. Strings-and-Coins is PSPACE-complete.

Proof. This follows immediately from Corollary 1 and Lemma 1.



5 Open problems
We have proved the PSPACE-completeness of Strings-and-Coins and Nimstring on
multigraphs, whereas Buchin et al. [4] proved the PSPACE-completeness of Dots-and-
Boxes and thus Strings-and-Coins on grid graphs. The main open problem is whether
Dots-and-Boxes with normal play instead of scoring, i. e., Nimstring on grid graphs,
is also PSPACE-complete. Toward this goal, we could also aim to prove the PSPACE-
completeness of Nimstring on simple graphs (with only one copy of each edge/string)
or planar graphs.

Bibliography
[1] American Mathematical Society, Elwyn Berlekamp Gives Arnold Ross Lecture, http://www.ams.
org/programs/students/arl2004, 2003.
[2] E. Berlekamp, J. Conway, and R. Guy, Winning Ways for Your Mathematical Plays, Volume 3, A K
Peters, Wellesley MA, 2003.
[3] E. Berlekamp, The Dots and Boxes Game: Sophisticated Child’s Play, A K Peters,
Massachusetts, 2000.
[4] K. Buchin, M. Hagedoorn, I. Kostitsyna, and M. van Mulken, Dots & boxes is PSPACE-complete, arXiv:2105.02837, 2021.
[5] S. Collette, E. Demaine, M. Demaine, and S. Langerman, Narrow misère dots-and-boxes, Games
of No Chance 4, Cambridge University Press, 2015.
[6] E. Demaine and R. Hearn, Playing games with algorithms: algorithmic combinatorial game
theory, Games of No Chance 3, Cambridge University Press, 2009.
[7] D. Eppstein, Computational Complexity of Games and Puzzles, http://www.ics.uci.edu/
~eppstein/cgt/hard.html.
[8] R. Guy and R. Nowakowski, Unsolved problems in combinatorial games, in More Games of No
Chance, Cambridge University Press, 2002.
[9] T. Schaefer, On the complexity of some two-person perfect-information games, J. Comput. System Sci. 16 (1978), 185–225.
[10] United States Mint, The United States government to sell the famed 1933 double eagle,
the most valuable gold coin in the world, https://www.usmint.gov/news/press-releases/
20020207-the-united-states-government-to-sell-the-famed-1933-double-eagle-the-most-
valuable-gold-coin-in-the-world, 2002.
Eric Duchêne, Marc Heinrich, Richard Nowakowski, and
Aline Parreau
Partizan subtraction games
Abstract: Partizan subtraction games are combinatorial games where two players, say
Left and Right, alternately remove a number n of tokens from a heap of tokens, with
n ∈ Sℒ (resp., n ∈ Sℛ ) when it is Left’s (resp., Right’s) turn. The first player unable to
move loses. These games were introduced by Fraenkel and Kotzig in 1987, where they
introduced the notion of dominance, i. e., an asymptotic behavior of the outcome se-
quence where Left always wins if the heap is sufficiently large. In the current paper,
we investigate other kinds of behaviors for the outcome sequence. In addition to dom-
inance, three other disjoint behaviors are defined, weak dominance, fairness, and ul-
timate impartiality. We consider the problem of computing this behavior with respect
to Sℒ and Sℛ , which is connected to the well-known Frobenius coin problem. General
results are given, together with arithmetic and geometric characterizations when the
sets Sℒ and Sℛ have size at most 2.

1 Introduction
Partizan subtraction games were introduced by Fraenkel and Kotzig [2] in 1987. They
are two-player combinatorial games played on a heap of tokens. Each player is as-
signed a finite set of integers, denoted Sℒ (for the Left player), and Sℛ (for the Right
player). A move consists in removing a number m of tokens from the heap, provided
that m belongs to the set of the player. The first player unable to move loses. When
Sℒ = Sℛ , the game is impartial and known as the standard subtraction game; see [1].
We now recall the useful notations and definitions coming from combinatorial
game theory. More information can be found in the reference book [7]. There are two
basic outcome functions: for a position g,

oL (g) = L if Left moving first has a winning strategy, and oL (g) = R otherwise,

and

oR (g) = R if Right moving first has a winning strategy, and oR (g) = L otherwise.

Acknowledgement: Supported by the ANR-14-CE25-0006 project of the French National Research Agency.

Eric Duchêne, LIRIS UMR CNRS, Université Lyon 1, Lyon, France, e-mail: eric.duchene@univ-lyon1.fr
Marc Heinrich, University of Leeds, School of Computing, Leeds, United Kingdom, e-mail: marc.heinrich@free.fr
Richard Nowakowski, Department of Mathematics and Statistics, Dalhousie University, Halifax, Nova Scotia, Canada, e-mail: R.Nowakowski@Dal.Ca
Aline Parreau, LIRIS UMR CNRS, Université Lyon 1, Lyon, France, e-mail: aline.parreau@univ-lyon1.fr

https://doi.org/10.1515/9783110755411-008

It is usual to talk of the outcome of a position g and the associated outcome func-
tion o(g),
– For oL (g) = oR (g) = L, Left wins regardless of who moves first, written o(g) = ℒ.
– For oL (g) = oR (g) = R, Right wins regardless of who moves first, o(g) = ℛ.
– For oL (g) = L, oR (g) = R, the player who starts has a winning strategy, o(g) = 𝒩 .
– For oL (g) = R, oR (g) = L, the second player has a winning strategy, o(g) = 𝒫 .

In the outcome function, there should be a reference to the game/rules. In this paper,
the position will be a number, but the rules will be clear from the context, so the rules
will not be included in the function.
A partizan subtraction game G with rules (Sℒ , Sℛ ) will be denoted (Sℒ , Sℛ ) in the
rest of the paper. A game position of G will be simply denoted by an integer n cor-
responding to the size of the heap. The outcome sequence of G is the sequence of the
outcomes for n = 0, 1, 2, 3, . . . , i. e., o(0), o(1), o(2), . . . . A well-known result ensures that
the outcome sequence of any impartial subtraction game is ultimately periodic [7].
Note that in that case the outcomes only have the values 𝒫 and 𝒩 since the game is
impartial. In [2], this result is extended to partizan subtraction games.

Theorem 1 (Fraenkel and Kotzig [2]). The outcome sequence of any partizan subtrac-
tion game is ultimately periodic.

Example 2. Consider the partizan subtraction game G = ({1, 2}, {1, 3}). The outcome
sequence of G is

𝒫 𝒩 ℒ 𝒩 ℒ ℒ ℒ ℒ ⋅⋅⋅.

In this particular case the periodicity of the sequence can be easily proved by showing
by induction that the outcome is ℒ for n ≥ 4.
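The outcome sequence of any partizan subtraction game can be computed by dynamic programming straight from the definitions of oL and oR . The sketch below is ours, not from the paper, and it reproduces Example 2:

    def outcome_sequence(SL, SR, n_max):
        """Outcomes of the partizan subtraction game (SL, SR) on heaps 0..n_max.

        oL[n] / oR[n] record whether Left / Right wins moving first from a heap
        of n tokens; the outcome P, N, L, or R is read off from the pair."""
        oL = [False] * (n_max + 1)
        oR = [False] * (n_max + 1)
        outcomes = []
        for n in range(n_max + 1):
            # Left wins moving first iff some move leaves a heap from which
            # Right, now moving first, loses; symmetrically for Right.
            oL[n] = any(not oR[n - s] for s in SL if s <= n)
            oR[n] = any(not oL[n - s] for s in SR if s <= n)
            outcomes.append({(True, True): 'N', (True, False): 'L',
                             (False, True): 'R', (False, False): 'P'}[(oL[n], oR[n])])
        return outcomes

    # Example 2: the game ({1, 2}, {1, 3}) starts P N L N L L L L ...
    assert ''.join(outcome_sequence({1, 2}, {1, 3}, 7)) == 'PNLNLLLL'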

Such a behavior where the outcome sequence has period 1 is rather frequent
for partizan subtraction games. In that case the period is only ℒ or ℛ. In their pa-
per, Fraenkel and Kotzig called this property dominance. More precisely, we say that
Sℒ ≻ Sℛ , or that Sℒ dominates Sℛ , if there exists an integer n0 such that the outcome
of the game (Sℒ , Sℛ ) is always ℒ for all n ≥ n0 . By symmetry a game satisfying Sℒ ≺ Sℛ
is always ℛ for all sufficiently large heap sizes. When a game satisfies neither Sℒ ≻ Sℛ
nor Sℒ ≺ Sℛ , the sets Sℒ and Sℛ are said to be incomparable, denoted by Sℒ ‖Sℛ . In
[2], several instances have been proved to satisfy the dominance property (i. e., the
games ({1, 2m}, {1, 2n + 1}), and ({1, 2m}, {1, 2n})) or to be incomparable like ({a}, {b}). It
is also shown that the dominance relation is not transitive. Note that in [5] the game
values (i. e., a refinement of the outcome notion) have been computed for the games
({1, 2}, {1, k}).
In the literature, partizan taking and breaking games have not been so much
considered. A more general version, where it is also allowed to split the heap into
two heaps, was introduced by Fraenkel and Kotzig [2] and is known as partizan octal
games. A particular case of such games, called partizan splittles, was considered in [4],
where, in addition, Sℒ and Sℛ are allowed to be infinite sets. Another variation with
infinite sets is when Sℒ and Sℛ make a partition of ℕ [3]. In such cases the ultimate
periodicity of the outcome sequence is not necessarily preserved.
In the current paper, we propose a refinement of the structure of the outcome se-
quence for partizan subtraction games. More precisely, when the sets Sℒ and Sℛ are in-
comparable, different kinds of periodicity can occur. The following definition presents
their classification.

Definition 3. The outcome sequence of G = Subtraction(Sℒ , Sℛ ) is:


– 𝒮𝒟 (strongly dominating) for Left (resp., Right), and we write Sℒ ≻ Sℛ (resp.,
Sℒ ≺ Sℛ ) if any position n large enough has outcome ℒ (resp., ℛ). In other words,
the period is reduced to ℒ (resp., ℛ).
– 𝒲𝒟 (weakly dominating) for Left (resp., Right), and we write Sℒ >w Sℛ if the period contains at least one ℒ and no ℛ (resp., at least one ℛ and no ℒ).
– ℱ (fair) if the period contains both ℒ and ℛ.
– 𝒰ℐ (ultimately impartial) if the period contains no ℒ and no ℛ.
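Once the eventual period of the outcome sequence is known (it exists by Theorem 1), Definition 3 is mechanical to apply. A small sketch of ours, taking the period as a string over P, N, L, R:

    def classify(period):
        """Classify one period of an outcome sequence per Definition 3."""
        has_L, has_R = 'L' in period, 'R' in period
        if has_L and has_R:
            return 'F'                       # fair
        if has_L or has_R:
            # strongly dominating if the period is the single outcome L (or R),
            # weakly dominating otherwise
            return 'SD' if len(set(period)) == 1 else 'WD'
        return 'UI'                          # ultimately impartial

    assert classify('L') == 'SD'    # e.g. ({1, 2}, {1, 3}) from Example 2
    assert classify('PN') == 'UI'   # impartial subtraction games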

Remark 4. Note that inside a period, not all the combinations of 𝒫 , 𝒩 , ℒ, and ℛ are
possible. For example, in a game that is not 𝒰ℐ , a period that includes 𝒫 must in-
clude 𝒩 . Indeed, assume on the contrary that it is not the case and let n be a position
of outcome 𝒫 in the period, where the period has length p. Let a ∈ Sℒ . Now the position
n + a is in the period, and o(n + a) = ℒ since Left can win going first and, by assump-
tion, o(n + a) ≠ 𝒩 . For the same reason, o(n + 2a) = ℒ. By repeating this argument,
o(n + ka) = ℒ for all k. Since n is in the period, we now have 𝒫 = o(n) = o(n + pa) = ℒ,
a contradiction.

Whereas the literature detailed above gives examples of 𝒮𝒟 and 𝒰ℐ games (as im-
partial subtraction games are 𝒰ℐ ), we will see later in this paper examples of 𝒲𝒟
games (e. g., Lemma 15). We now give an example of a fair game.

Example 5. Let Sℒ = {c, c + 1} and Sℛ = {1, b} with b = c(c + 1) and c > 1. Then the
game (Sℒ , Sℛ ) is ℱ .

Proof. We proceed by induction on the size of the heap to show that there are infinitely
many ℒ and ℛ. Since c > 1, we have o(1) = ℛ and o(c + 1) = ℒ. Now we assume that
for some n, o(n) = ℒ, and show that o(n + b + c) = ℒ. In the position n + b + c, Left thinks of the heap as two components, n and b + c: if Right removes 1 token, Left regards this as a move in the n component; otherwise it is a move in the second component. Left moving
first applies her winning strategy on n and then, regardless of whether Left moved
first or second, responds in the remnants of the n heap whenever Right removes 1 token.
If at some point, Right chooses to remove b tokens, then Left answers immediately by
removing c tokens, eliminating the second heap. In that case, Left wins at the end by
applying her winning strategy on n. On the contrary, if Right never plays b, then Left
empties the n component, and it is Right’s turn from the b + c position. Again, from
b + c, playing b is a losing move for Right. If he plays 1, then Left plays c + 1, leading
to the position b − 2 = (c − 1)(c + 2). All the next legal moves of Right are 1, and all the
answers of Left are c + 1, which guarantees to empty the position and hence win the
game.
Assume now that o(n) = ℛ, and we show that o(n+b+c) = ℛ. As previously, Right
considers this position as the two heaps n and b + c. He applies his winning strategy
on n, and any move c of Left leads Right to answer by removing b tokens, leaving a
winning position for Right. Hence assume that Left plays c + 1 until Right wins on n.
At this point, Left has to play from a position k + b + c with k < c. If k = 0, then Left
loses for the same reasons as in the above case (as the position b + c is 𝒫 ). Otherwise,
any move c or c + 1 of Left is followed by a move b of Right, leading to a position with
at most k tokens, from which Left cannot play and loses.

The paper is organized as follows. In Section 2, we consider the two decision prob-
lems related to the computation of the outcome of a game position and of the behavior
of the outcome sequence. Links with the Frobenius coin problem and the knapsack
problem are given. Then we try to characterize the behavior of the outcome sequence
(𝒮𝒟, 𝒲𝒟, ℱ or 𝒰ℐ ) according to Sℒ and Sℛ . When Sℒ is fixed, Section 3 gives gen-
eral results about strong and weak dominance according to the size of Sℛ . In Sec-
tions 4 and 5, we characterize the behavior of the outcome sequence when |Sℛ | = 1
and |Sℒ | ≤ 2. Section 5 is devoted to the case |Sℒ | = |Sℛ | = 2, where it is proved that
the sequence is mostly strongly dominating.

2 Complexity
Computing the outcome of a game position is a natural question when studying combi-
natorial games. For partizan subtraction games, we know that the outcome sequence
is eventually periodic. This implies that if Sℒ and Sℛ are fixed, then computing the
outcome of a given position n can be done in polynomial time. However, if the sub-
traction sets are part of the input, then the algorithmic complexity of the problem is
not so clear. This problem can be expressed as follows:

psg outcome
Input: two sets of integers Sℒ and Sℛ , a game position n
Output: the outcome of n for the game (Sℒ , Sℛ )
In the next result, we show that this problem is actually NP-hard.

Theorem 6. psg outcome is NP-hard, even in the case where the set of one of the players
is reduced to one element.

Proof. We use a reduction from the Unbounded Knapsack Problem, defined as follows.

Unbounded Knapsack Problem


Input: a set S and an integer n
Output: can n be written as a sum of nonnegative multiples of elements of S?

The Unbounded Knapsack Problem was shown to be NP-complete in [9].


Let (S, n) be an instance of unbounded knapsack problem, where S is a finite set
of integers, and n is a positive integer. Without loss of generality, we can assume that
1 ∈ ̸ S since otherwise the problem is trivial. We consider the partizan subtraction game
where Left can only play 1, and Right can play any number x such that x + 1 ∈ S. In
other words, we have Sℒ = {1} and Sℛ = S − 1. We claim that for this game, Right has a
winning strategy playing second if and only if n can be written as a sum of nonnegative
multiples of elements of S.
Observe that during one round (i. e., one move of Left followed by one move of
Right), if x is the number of tokens that were removed, then x ∈ S. Suppose that Right
has a winning strategy and consider any play where Right plays according to this strat-
egy. Then Right makes the last move, and after this move, no token remains. Indeed,
if there was at least one token remaining, then Left could still remove this token and
continue the game. At each round an element of S was removed, and at the end, no
tokens remain. This implies that n is a sum of nonnegative multiples of S.
In the other direction, if n is a sum of nonnegative multiples of S, then we can
write n = ∑x∈S nx x. A winning strategy for Right is simply to play nx times the move
x − 1 for each x ∈ S.
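
As a sanity check of the reduction (our sketch, with illustrative names; not part of the original proof), one can compare, for small heap sizes, whether Right wins playing second in ({1}, S − 1) against a direct dynamic-programming test for the Unbounded Knapsack Problem:

    def right_wins_second(SL, SR, n):
        # Right wins playing second from n iff Left, moving first, loses.
        lf = [False] * (n + 1); rf = [False] * (n + 1)
        for m in range(n + 1):
            lf[m] = any(not rf[m - s] for s in SL if s <= m)
            rf[m] = any(not lf[m - s] for s in SR if s <= m)
        return not lf[n]

    def unbounded_knapsack(S, n):
        # Can n be written as a sum of nonnegative multiples of elements of S?
        reach = [False] * (n + 1); reach[0] = True
        for m in range(1, n + 1):
            reach[m] = any(reach[m - s] for s in S if s <= m)
        return reach[n]

    S = {2, 3}  # any finite set with 1 not in S will do
    assert all(right_wins_second({1}, {x - 1 for x in S}, n) == unbounded_knapsack(S, n)
               for n in range(60))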

Remark 7. In the case of impartial subtraction games (i. e., Sℒ = Sℛ ), there is no


known result about the complexity of this problem. This is surprising as these games
have been thoroughly investigated in the literature.

The second question that emerged from partizan subtraction games is the behav-
ior of the outcome sequence according to Definition 3. It can also be formulated as a
decision problem.

psg sequence
Input: two sets of integers Sℒ and Sℛ
Output: is the game (Sℒ , Sℛ ) 𝒮𝒟, 𝒲𝒟 (and not 𝒮𝒟), ℱ , or 𝒰ℐ ?

Unlike psg outcome, the algorithmic complexity is open for psg sequence. In the
next sections, we consider this problem for some particular cases. In addition, we can
wonder whether the knowledge of the sequence could help to compute the outcome
of a game position. The answer is no, even if the game is 𝒮𝒟:

Proposition 8. Let Sℒ = {a1 , . . . , an } be such that gcd(a1 + 1, . . . , an + 1) = 1, and let


Sℛ = {1}. The game (Sℒ , Sℛ ) is 𝒮𝒟 for Left, but computing the length of the preperiod is
NP-hard.

The proof is based on the well-known coin problem (also called Frobenius prob-
lem).

coin problem
Input: a set of n positive integers a1 , . . . , an such that gcd(a1 , . . . , an ) = 1
Output: the largest integer that cannot be expressed as a nonnegative integer combination of a1 , . . . , an .

This value is called the Frobenius number. For n = 2, the Frobenius number equals
a1 a2 − a1 − a2 [8].1 No explicit formula is known for larger values of n. Moreover, the
problem has been proved to be NP-hard in the general case [6].

Proof of Proposition 8. Under the assumptions of the proposition, we will show that
the length of the preperiod is exactly the Frobenius number of {a1 +1, . . . , an +1}. Indeed,
let N be the Frobenius number of {a1 + 1, . . . , an + 1}. Then N + 1, N + 2, . . . can be written
as linear combinations of {a1 + 1, . . . , an + 1}. Note that in the game (Sℒ , Sℛ ), any round
(sequence of two moves) can be seen as a linear combination of {a1 + 1, . . . , an + 1}, as
Left plays ai and Right plays 1. Hence if Right starts from N + 1, then Left follows the
linear combination for N + 1 to choose her moves, so as to play an even number of
moves until the heap is empty. For the same reasons, if Right starts from N + 2, then
Left has a winning strategy as a second player. Since Right’s first move is necessarily
1, this means that Left has a winning strategy as a first player from N + 1. Thus the
position satisfies o(N + 1) = ℒ. Using the same arguments, this remains true for all
positions greater than N + 1. In other words, it proves that the game is 𝒮𝒟 for Left.
Now we consider the position N and show that o(N) ≠ ℒ. Indeed, assume that Right
starts and Left has a winning strategy. This means that an even number of moves will
be played. According to the previous remark, the sequence of moves that is winning
for Left is necessarily a linear combination of {a1 + 1, . . . , an + 1}. This contradicts the
Frobenius property of N.
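
For instance, with Sℒ = {2, 3} and Sℛ = {1}, the Frobenius number of {3, 4} is 3 · 4 − 3 − 4 = 5, so the proposition predicts o(5) ≠ ℒ and o(n) = ℒ for all n ≥ 6. A short computation (our sketch, repeating the small dynamic-programming helper from Section 1 so that it is self-contained) confirms this:

    def outcomes(SL, SR, N):
        lf = [False] * N; rf = [False] * N
        for n in range(N):
            lf[n] = any(not rf[n - s] for s in SL if s <= n)
            rf[n] = any(not lf[n - s] for s in SR if s <= n)
        return "".join("NLRP"[2 * (not lf[n]) + (not rf[n])] for n in range(N))

    seq = outcomes({2, 3}, {1}, 40)
    print(seq[:10])  # PRNLPNLLLL: o(5) = N, and o(n) = L for all n >= 6
    assert seq[5] != "L" and all(c == "L" for c in seq[6:])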

This correlation between partizan subtraction games and the coin problem will
be reused further in this paper.

1 Although not germane to this paper, Sylvester’s solution is central to the strategy stealing argument
that proves that naming a prime 5 or greater is a winning move in sylver coinage [1, Ch. 18].
3 When Sℒ is fixed
In this section, we consider the case where Sℒ is fixed and study the behavior of the
sequence as Sℛ varies. In particular, we look for sets Sℛ that make the game (Sℒ , Sℛ )
favorable for Right. This can be seen as a prelude to the game where players would
choose their sets before playing: if Left has chosen her set Sℒ , can Right force the game
to be asymptotically more favorable for him?

3.1 The case |Sℛ | > |Sℒ |


If Sℛ can be larger than Sℒ, then it is always possible to obtain a game favorable for Right, as is proved in the following theorem.

Theorem 9. Let Sℒ be any finite set of integers. Let p be the period of the impartial sub-
traction game played with Sℒ , and let Sℛ = Sℒ ∪ {p}. Then Right strongly dominates the
game (Sℒ , Sℛ ), i. e., the game (Sℒ , Sℛ ) is ultimately ℛ.

Proof. Let n0 be the preperiod of the impartial subtraction game played on Sℒ, and let m be the maximal value of Sℒ. We prove that Right wins if he starts on any heap of
size n > n0 + p, which implies that the outcome on (Sℒ , Sℛ ) is ℛ for any heap of size
n > n0 + p + m.
If n is an 𝒩 -position for the impartial subtraction game on Sℒ , then Right follows
the strategy for the first player, never uses the value p, and wins.
If n is a 𝒫 -position, Right takes p tokens, leaving Left with a heap of size n−p > n0 ,
which is, using periodicity, also a 𝒫 -position in the impartial game. After Left’s move,
we are in the case of the previous paragraph and Right wins.
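
As an illustration (our sketch, not from the paper): for Sℒ = {1, 2}, the 𝒫-positions of the impartial game on {1, 2} are exactly the multiples of 3, so the period is p = 3, and Theorem 9 predicts that ({1, 2}, {1, 2, 3}) is ultimately ℛ:

    def outcomes(SL, SR, N):
        lf = [False] * N; rf = [False] * N
        for n in range(N):
            lf[n] = any(not rf[n - s] for s in SL if s <= n)
            rf[n] = any(not lf[n - s] for s in SR if s <= n)
        return "".join("NLRP"[2 * (not lf[n]) + (not rf[n])] for n in range(N))

    # SL = {1, 2} has impartial period p = 3; take SR = SL ∪ {p}.
    print(outcomes({1, 2}, {1, 2, 3}, 12))  # prints PNNRRRRRRRRR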
Note that in the previous theorem, Sℛ contains the set Sℒ, so the two sets have a large intersection. We prove in the next theorem that if Sℛ cannot contain any
value in Sℒ , then it is still possible to have a game that is at least fair for Right (i. e., it
contains an infinite number of ℛ-positions). Note that we do not know if for any set
Sℒ , there is always a set Sℛ with |Sℛ | = |Sℒ | + 1 and Sℛ ∩ Sℒ = ∅ that is (weakly or
strongly) dominating for Right.

Theorem 10. For any set Sℒ , there exists a set Sℛ with Sℒ ∩ Sℛ = ∅ and |Sℛ | = |Sℒ | + 1
such that the resulting game contains an infinite number of ℛ-positions.

Proof. Let n be any integer such that the set A = {n − m : m ∈ Sℒ} is a set of positive integers that is disjoint from Sℒ. Putting Sℛ = A ∪ {n} gives a set that satisfies the conditions of the theorem; consider the game (Sℒ, Sℛ).
We claim that o(kn) = ℛ for k = 1, 2, . . . . If Left starts on a position kn by removing
m tokens, then Right can answer by taking n − m tokens, leaving (k − 1)n tokens, and by induction Right wins. If Right starts, then he takes n tokens, and again Left faces a multiple of n and loses.
Consequently, if Right has a small advantage in the size of his set, then he can
ensure that the sequence of outcomes contains an infinite number of ℛ-positions. So
having a larger subtraction set seems to be an important advantage. However, having a
larger set is not always enough to guarantee dominance. Indeed, we have the following
result.

Theorem 11. Let G = (Sℒ, Sℛ) be a partizan subtraction game. Assume that |Sℒ| ≥ 2 and that G is eventually ℒ with preperiod at most p. Let x1, x2 ∈ Sℒ with x1 < x2, and let d be an integer with d > p + max(Sℛ ∪ {x2 − x1}). Then Gd = (Sℒ, Sℛ ∪ {d}) is eventually ℒ with preperiod at most (d + x2)⌈(d + x2)/(x2 − x1)⌉.

Proof. Let G, d, x1 , and x2 be as in the statement of the theorem. We first prove the
following claim.

Claim 12. In the game Gd, if oL(n) = ℒ (resp., oR(n) = ℒ), then Left has a winning strategy on n + (d + x) as the first (resp., second) player for any x ∈ Sℒ.

Proof of Claim 12. We will show the result by induction on n.


First, assume that oR (n) = ℒ. We will show that there is a winning strategy for Left
playing second on n + d + x. Starting from the position n + d + x, there are three possible
cases:
– Right plays y ∈ Sℛ with y ≤ n. By the assumption on n, Left wins as the first player on n − y, and using the induction hypothesis, she also wins as the first player on n − y + d + x. Therefore Left wins as the second player on n + d + x.
– Right plays y ∈ Sℛ with y > n. Now Left answers by playing x. This leads to the
position (n − y) + d with (n − y) + d > p by the assumption on d, and n − y + d < d by
the assumption on y. Since n − y + d < d, Right can no longer play his move d, and
the outcome of Gd on n − y + d is the same as the outcome of G on this position.
Since n − y + d > p, Left wins playing second on this position.
– Right plays d. Then Left answers by playing x, leading to the position n on which
Left wins as second player by assumption.

Suppose now that Left wins playing first on n, and let y ∈ Sℒ be a winning move for
Left. Then Left wins playing second on n − y, and using the induction hypothesis, she
wins playing second on n − y + d + x. Consequently, y is a winning move for Left on
n + d + x.

For i ≥ 0, denote by Xi the set of integers k < d+x2 such that the position i(d+x2 )+k
is ℒ for Gd . To prove the theorem, it suffices to show that if i is large enough, then
Xi = [0, x2 + d[. From the claim above we know that Xi ⊆ Xi+1 .
Additionally, using the hypothesis on d, we have that [p + 1, d − 1] ⊆ X0 . Finally, we
have the following property. For any x ≥ 0, if x ∈ Xi , then x − (x2 − x1 ) mod (d + x2 ) ∈
Xi+1 . Indeed, if x ∈ Xi , then i(d + x2 ) + x is an ℒ-position, and using the claim above, so
is i(d + x2 ) + x + d + x1 = (i + 1)(d + x2 ) + x − (x2 − x1 ).
Let 0 ≤ x < d + x2, and write (d − x) mod (d + x2) = α(x2 − x1) + β, the Euclidean division of (d − x) mod (d + x2) by (x2 − x1). We have 0 < β ≤ x2 − x1 and α ≤ ⌈(d + x2)/(x2 − x1)⌉. This can be rewritten as

x = (d − β) − α(x2 − x1) mod (d + x2).

Since we know that d − β ≥ p by the assumption on d, we have that (d − β) ∈ X0, and using the observation above, this implies that x ∈ Xα ⊆ X⌈(d+x2)/(x2−x1)⌉.
Consequently, Gd is ultimately ℒ, and the preperiod is at most (d + x2)⌈(d + x2)/(x2 − x1)⌉.

Applying Theorem 11 iteratively, starting from a game that is 𝒮𝒟 for Left (like the game of
Example 2), we obtain the following corollary.

Corollary 13. There are sets Sℒ and Sℛ with |Sℒ | = 2 and |Sℛ | arbitrarily large such that
(Sℒ , Sℛ ) is 𝒮𝒟 for Left.

Remark 14. The condition d > p + max(Sℛ ∪ {x2 − x1}) in Theorem 11 is optimal. Indeed, take Sℒ = {c, c + 1} and Sℛ = {1}. As seen in the proof of Proposition 8, the game (Sℒ, Sℛ) is 𝒮𝒟 for Left, with preperiod the Frobenius number of {c + 1, c + 2}, which is p = (c + 1)(c + 2) − (c + 1) − (c + 2) = c² + c − 1 = c(c + 1) − 1. Thus, by Theorem 11, the game ({c, c + 1}, {1, d}) with d > p + 1 = c(c + 1) is also 𝒮𝒟 for Left. However, as proved in Example 5, this is not true for d = c(c + 1) since then the game is ℱ.

3.2 The case |Sℛ | ≤ |Sℒ |


We first consider the case Sℒ = {1, . . . , k} and prove that the game is always favorable
to Left and that Sℒ strongly dominates in all but a few cases.

Lemma 15. Let Sℒ = {1, . . . , k}, and let |Sℛ | = k. Then:


1. If Sℛ = {c + 1, c + 2, . . . , c + k} for some integer c, then Left weakly dominates if c > 0,
and the game is impartial if c = 0,
2. otherwise, Left strongly dominates.

Proof.
1. In this case, the game is purely periodic with period 𝒫 ℒ^c 𝒩^k. This can be proved by induction on the size of the heap n. If 0 < n ≤ c, then only Left can play, and the game is trivially ℒ. Otherwise, let x = n mod (c + k + 1). If x = 0 and the first player removes i tokens, then the second player answers by removing c + k + 1 − i tokens, leading to the position n − c − k − 1, which is 𝒫 by induction, and so is n. If 0 < x < c + 1, then when Left starts, she takes one token, leading to an ℒ- or 𝒫-position, and wins. If she is second, then she plays as before to n − c − k − 1, which is an ℒ-position. Finally, if x ≥ c + 1, then both players win playing first, by playing x − c for Left and x for Right.
2. We show that if n > 0 is such that Right wins playing second on n, then this implies that Sℛ contains k consecutive integers. Let n0 be the smallest positive integer such that oL(n0) = ℛ. We know that n0 > k, since otherwise Left can win playing first by playing to zero. Since Right has a winning strategy playing second, he has a winning first move on all the positions n0 − i for 1 ≤ i ≤ k. This means that for each of these positions, Right has a winning move to some position mi with oL(mi) = ℛ. By the minimality of n0 this implies that mi = 0, and consequently n0 − i ∈ Sℛ for all 1 ≤ i ≤ k. Consequently, if Sℛ does not contain k consecutive integers, then
there is no position n > 0 such that Right wins playing second. In particular, there
are neither ℛ- nor 𝒫 -positions in the period. By Remark 4 this implies that the
period only contains ℒ-positions, meaning that the game is strongly dominating
for Left.

The set Sℒ = {1, . . . , k} is in a sense optimal for Left, since the exceptions to strong domination for Left in the previous lemma appear for any set of k elements:

Lemma 16. For any set Sℒ , there is a set Sℛ with |Sℛ | = |Sℒ | and Sℛ ∩ Sℒ = ∅ such that
Left does not strongly dominate.

Proof. Let Sℛ = n0 − Sℒ for an integer n0 larger than all the values of Sℒ and such that Sℛ ∩ Sℒ = ∅. Then Right wins playing second on all the multiples of n0.

4 When one set has size 1


We now consider the case where one of the sets, say Sℛ, has size 1. As seen in Section 2, the study of the game is closely related to the Unbounded Knapsack Problem and to the coin problem. Indeed, Right has no choice, and thus the result depends only on whether n can be decomposed as a combination of the values in Sℒ + Sℛ. Our aim in this section is to exhibit the precise periods.

4.1 Case |Sℒ | = |Sℛ | = 1


In this very particular case, the game is always 𝒲𝒟 for the player with the smaller integer.

Lemma 17. Let Sℒ = {a} and Sℛ = {b} with a < b. The outcome sequence of (Sℒ, Sℛ) is purely periodic, the period length is a + b, and the period is 𝒫^a ℒ^(b−a) 𝒩^a. In particular,
the game is weakly dominating for Left.

Proof. We prove that for all n ≥ 0, if one of the players has a winning move playing
first (resp., second) on n, then he also has one playing first (resp., second) on n + a + b.
Indeed, suppose, for example, that Left has a winning move on position n playing first
(the other cases are treated in the same way). If Left plays first on position n + a + b,
then after two moves, it is again Left’s turn to play, and the position is now n, and Left
wins the game.
The result then follows from computing the outcome of the positions n ≤ a + b.
These outcomes are tabulated in Table 8.1.

Table 8.1: Outcomes with Sℒ = {a} and Sℛ = {b} for the first values.

Heap sizes           | Left move range  | Right move range | Outcome
[0, a − 1]           | no moves         | no moves         | 𝒫
[a, b − 1]           | [0, b − a − 1]   | no moves         | ℒ
[b, b + a − 1]       | [b − a, b − 1]   | [0, a − 1]       | 𝒩
[b + a, b + 2a − 1]  | [b, b + a − 1]   | [a, 2a − 1]      | 𝒫
[b + 2a, 2b + a − 1] | [b + a, 2b − 1]  | [2a, b + a − 1]  | ℒ
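
The table and the claimed period are easy to confirm mechanically; for example, with a = 2 and b = 5 the outcome sequence should be the pure repetition of 𝒫^2 ℒ^3 𝒩^2. A sketch (ours, with the same small helper as before):

    def outcomes(SL, SR, N):
        lf = [False] * N; rf = [False] * N
        for n in range(N):
            lf[n] = any(not rf[n - s] for s in SL if s <= n)
            rf[n] = any(not lf[n - s] for s in SR if s <= n)
        return "".join("NLRP"[2 * (not lf[n]) + (not rf[n])] for n in range(N))

    assert outcomes({2}, {5}, 28) == "PPLLLNN" * 4  # period P^2 L^3 N^2, length 7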

4.2 Case |Sℒ | = 2 and |Sℛ | = 1


In these cases, we are able to give the complete periods.

Theorem 18. Let a, b, and c be three positive integers, and let g = gcd(a + c, b + c). The
game ({a, b}, {c}) is:
– strongly dominated by Left if g ≤ c,
– weakly dominated by Left with period (𝒫^(g−c) ℒ^(2c−g) 𝒩^(g−c)) if c < g < 2c,
– ultimately impartial with period (𝒫^c 𝒩^c) if g = 2c,
– weakly dominated by Right with period (𝒫^c ℛ^(g−2c) 𝒩^c) if g > 2c.

Proof. Throughout this proof, we write n = qg + r with 0 ≤ r < g.


We start by proving the following claim, which holds in all four cases.

Claim 19. If (n mod g) < c, then oR(n) = ℒ for n large enough.

Proof. After both players play once, the number of tokens decreases by either a + c
or b + c, depending on which move Left played. By the results on the coin problem
we know that if q is large enough, then qg can be written as α(a + c) + β(b + c) with
nonnegative integers α and β. If Left plays second, then a strategy can be to play a α
times and b β times. After these moves, it is Right’s turn to play, and the position is
r < c. Consequently, Right now has no move and loses the game.

We will now use this claim to prove the result in four different cases.
For the first case, we have g ≤ c. For any integer n, we have (n mod g) < g ≤ c.
Consequently, by Claim 19 there is an integer n0 such that for any n ≥ n0, oR(n) = ℒ. This also implies that for any n ≥ n0 + a, oL(n) = ℒ, since Left can play to n − a ≥ n0 and, by the claim, oR(n − a) = ℒ. Thus the outcome is ℒ for any position n large enough.
For the three remaining cases, we will show that the following four properties hold
when n is large enough. The result of the theorem immediately follows from these four
properties.
1. if r < c, then Left wins playing second,
2. if r ≥ g − c, then Left wins playing first,
3. if r ≥ c, then Right wins playing first,
4. if r < g − c, then Right wins playing second.

We now prove these four points.


1. This point is exactly the claim above.
2. If r ≥ g − c and n is large enough, then Left can play a. The position after the move is such that n − a ≡ r − a ≡ r + c mod g. Moreover, since g − c ≤ r < g, we know that g ≤ r + c < g + c. From item 1 we know that oR(n − a) = ℒ if n − a is large enough, so Left has a winning strategy as a first player if r ≥ g − c.
3. If r ≥ c and Right plays first, then whatever Left plays, after an even number of moves, Right still has a move available. Indeed, let n′ be the position reached after an even number of moves. The number of tokens removed, n − n′, is a multiple of g. Consequently, n′ ≡ r mod g, and since r ≥ c, this implies that n′ ≥ c, so Right can play c. This proves that Right will never be blocked, and Left will eventually lose the game.
4. Finally, if r < g − c, then Left playing first can move to a position n′ equal to either
n − a or n − b. Since a ≡ b ≡ −c mod g, in both cases, we have n′ ≡ r + c mod g.
Since c ≤ r + c < g, by the argument above we know that Right playing first on n′
wins. Consequently, Left playing first on n loses.
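
The four regimes are easy to observe numerically; in the sketch below (ours, with illustrative sample triples), each choice of (a, b, c) falls into a different case of the theorem:

    from math import gcd

    def outcomes(SL, SR, N):
        lf = [False] * N; rf = [False] * N
        for n in range(N):
            lf[n] = any(not rf[n - s] for s in SL if s <= n)
            rf[n] = any(not lf[n - s] for s in SR if s <= n)
        return "".join("NLRP"[2 * (not lf[n]) + (not rf[n])] for n in range(N))

    # g <= c, c < g < 2c, g = 2c, and g > 2c, respectively:
    for a, b, c in [(1, 2, 3), (1, 5, 3), (2, 6, 2), (2, 5, 1)]:
        g = gcd(a + c, b + c)
        print((a, b, c), "g =", g, "tail:", outcomes({a, b}, {c}, 60)[-12:])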

When c > b and b ≥ 2a, which is included in the first case, we know the whole
outcome sequence. This will be useful in the next section.

Theorem 20. The outcome sequence of the game ({a, b}, {c}) with c > b and b ≥ 2a is
the following:

𝒫^a ℒ^(c−a) 𝒩^a ℒ^∞.

Proof. We show the result by induction on n, the position of the game.


– If n < a, then neither player has a move, and thus o(n) = 𝒫 .
– If a ≤ n < c, then only Left has a valid move, and thus o(n) = ℒ.
– If c ≤ n < a + c, then Right has a winning move to the position n − c < a, which has outcome 𝒫, and Left has a winning move to a position with outcome either 𝒫 or ℒ, and thus o(n) = 𝒩.
– Finally, if n ≥ a + c, then Right has no winning move, and Left has at least one winning move. Indeed, since b − a ≥ a, we cannot have both n − a and n − b in the interval [c, a + c[, whose length is a. So at least one of n − a and n − b is not in this interval and is either a 𝒫-position or an ℒ-position by induction.
Figure 8.1: Properties of the outcome sequences for G = ({a, a + k}, {c, d}). The parameters a and k
are fixed, and the pictures are obtained by varying the parameters c and d. The point at coordinate
(c, d) is blue if the corresponding game is eventually ℒ, red if it is eventually ℛ, and green if there is
a mixed period.

5 When both sets have size 2


The goal of this section is to investigate the sequence of outcomes for the game G = (Sℒ, Sℛ) with Sℒ = {a, b} and Sℛ = {c, d}. In particular, if we suppose that a and b are fixed, then we would like to characterize for which pairs (c, d) the game G is eventually ℒ. The picture in Figure 8.1 gives an insight into what is happening. In the figure on the left, we have an example with b ≥ 2a. In this case, the game G is almost always eventually ℒ, except when the point (c, d) is close to the diagonal, i. e., when |d − c| is close to zero. When (c, d) is close to the diagonal, the behavior seems more complicated, and we will not give a characterization here.
When b < 2a, the behavior is more complicated but shares some similarities with
the previous case. From the picture on the right in Figure 8.1 we can see that there are
some lines such that if the point (c, d) is far enough from these lines, then the game
is eventually ℒ. Again, when the point is close to these lines, the behavior is more
complex, and we will not try to characterize it here. In all cases, we can see that if a
and b are fixed, then for almost all of the choices of c and d, Left dominates.
In the rest of this section, we will assume that d > c > b. We start with the case
b ≥ 2a, which is easier to analyze.

5.1 Case b ≥ 2a
We start with the case where b ≥ 2a and show that in this case, G is ultimately ℒ if (c, d) is far enough from the diagonal.
Theorem 21. If b ≥ 2a and d > c + b, then Sℒ ≻ Sℛ . More precisely, the outcome


sequence is

𝒫^a ℒ^(c−a) 𝒩^a ℒ^(d−c−a) 𝒩^a ℒ^∞.

Proof. Again, we will show this result by induction on n, the starting position of the game. Write b = a + k, so that k ≥ a, and let G′ be the game ({a, a + k}, {c}). If n < d, then G played on n has the same outcome as G′, since playing d is not a valid move for Right in this case. Consequently, we can just apply Theorem 20 and get the desired result. Otherwise, there are two possible cases:
– If d ≤ n < d + a, then Right has a winning move to the position n − d < a, and Left has a winning move by playing her strategy for the game G′ on n. Indeed, this leads to a position n − x < d for some x ∈ {a, a + k} with outcome either 𝒫 or ℒ for G′ and, consequently, also for G, since d cannot be played anymore at this point.
– If n ≥ d + a, then denote by I1 and I2 the two intervals containing the 𝒩-positions, i. e., I1 = [c, c + a[ and I2 = [d, d + a[. Since k ≥ a, we cannot have that n − a and n − a − k are both in I1 or both in I2. Additionally, since d > c + b, we cannot have both n − a − k ∈ I1 and n − a ∈ I2 at the same time. Consequently, one of n − a and n − a − k has outcome either ℒ or 𝒫, and Left has a winning move on n.
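
For example, with (a, b, c, d) = (1, 3, 4, 9), we have b ≥ 2a and d > c + b, and the theorem predicts the sequence 𝒫 ℒ^3 𝒩 ℒ^4 𝒩 ℒ^∞. A direct computation (our sketch) agrees:

    def outcomes(SL, SR, N):
        lf = [False] * N; rf = [False] * N
        for n in range(N):
            lf[n] = any(not rf[n - s] for s in SL if s <= n)
            rf[n] = any(not lf[n - s] for s in SR if s <= n)
        return "".join("NLRP"[2 * (not lf[n]) + (not rf[n])] for n in range(N))

    a, b, c, d = 1, 3, 4, 9
    seq = outcomes({a, b}, {c, d}, 50)
    head = "P" * a + "L" * (c - a) + "N" * a + "L" * (d - c - a) + "N" * a
    assert seq.startswith(head) and set(seq[len(head):]) == {"L"}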

5.2 General case


In the general case, we will again prove that if we fix a and b, then for most choices of
c and d, the outcome is ultimately ℒ. The exceptional cases are slightly more compli-
cated to characterize. The characterization is related to the following definition.

Definition 22. Given an integer a and a real number α ≥ 1, we denote by Ta,α the set of points defined as follows:
– T0,α = {(c, d) : gcd(c, d) ≥ max(c, d)/α};
– for a ≥ 1, Ta,α is obtained from T0,α by a translation of vector (−a, −a).

We can remark that for any α and β with β ≥ α, we have T0,α ⊆ T0,β . We now prove
some properties of the sets Ta,α , which will be useful for the proofs later on.

Lemma 23. Assume that there are some positive integers x, y, u, and v such that xu−yv =
0 with (u, v) ≠ (0, 0). Then (x, y) ∈ T0,max(u,v) .

Proof. Up to dividing u and v by gcd(u, v), we can assume that u and v are coprime. Then the equation is xu = yv. Consequently, u is a divisor of yv, and since u and v are coprime, this means that u is a divisor of y. We can write y = gu for some integer g, and, consequently, we have xu = yv = vgu. This means that x = vg and g = gcd(x, y). Consequently, max(x, y)/gcd(x, y) = max(u, v), and (x, y) ∈ T0,max(u,v).
Given two points p = (x, y) and p′ = (x′ , y′ ), we denote by d(p, p′ ) the distance
between these two points according to the 1-norm: d(p, p′ ) = |x − x ′ | + |y − y′ |. If 𝒟 is
a subset of ℕ2 , then we denote by d(p, 𝒟) = min{d(p, p′′ ), p′′ ∈ 𝒟} the distance of the
point p to the set 𝒟.

Lemma 24. Let x, y, u, v, and a be positive integers such that |xu − yv| ≤ a. Then
d((x, y), T0,max(u,v) ) ≤ a(u + v).

Proof. Let r = xu − yv with |r| ≤ a, and let g = gcd(u, v). By definition, r is a multiple
of g, and we can write r = qg for some integer q. Additionally, by Bézout’s identity
we know that there exist two integers u′ and v′ such that uu′ + vv′ = g, |u′ | ≤ u, and
|v′ | ≤ v. Consider the point (x′ , y′ ) with x′ = x − qu′ and y′ = y + qv′ . We have the
following:

x′u − y′v = xu − yv − q(uu′ + vv′) = r − qg = 0.

By Lemma 23 we know that (x′ , y′ ) ∈ T0,max(u,v) . Additionally, d((x, y), (x′ , y′ )) = |qu′ | +
|qv′ | ≤ |r|(u + v) ≤ a(u + v). This proves the lemma.
For any a and α, the set Ta,α satisfies the following properties.

Lemma 25. For any a and α, the set Ta,α is the union of a finite set of lines.

Proof. Since Ta,α can be obtained from T0,α by a translation, we only need to prove the
result in the case a = 0. Let 𝒟 be the union of the lines with equation xu − yv = 0 for
all u, v ≤ α. The set 𝒟 is the union of a finite number of lines. By Lemma 23 we know
that 𝒟 ⊆ T0,α. Conversely, let (x, y) be a point in T0,α, and let g = gcd(x, y). We can write x = x′g and y = y′g for some integers x′ and y′. We have the following:

xy′ − yx′ = x′gy′ − y′gx′ = 0.

Additionally, we have x′ = x/g ≤ max(x, y)/g ≤ α, and similarly for y′. Consequently, (x, y) ∈ 𝒟, and T0,α = 𝒟.

The goal in the remainder of this section is to prove the following theorem.

Theorem 26. Let a, b, c, and d be positive integers, and let A = ⌈a/(b − a)⌉ + 1. Assume that d((c, d), Tb,A) ≥ 2A(a + 2b). Then the partizan subtraction game with Sℒ = {a, b} and Sℛ = {c, d} is ultimately ℒ.

Given two integers i and j, we define the following intervals:
– I𝒫i,j = [αi,j, αi,j + a − (i + j)(b − a)[, and
– I𝒩i,j = [βi,j, βi,j + a − (i + j − 1)(b − a)[,
where
– αi,j = i(d + b) + j(c + b), and
– βi,j = αi,j − b.
Denote by I𝒫 the set ∪i,j I𝒫i,j and, similarly, I𝒩 = ∪i,j I𝒩i,j. Note that I𝒫i,j is empty if i + j ≥ ⌈a/(b − a)⌉ and I𝒩i,j is empty if i + j ≥ ⌈a/(b − a)⌉ + 1. Our goal is to show that under the conditions in the statement of the theorem, the set I𝒩 is the set of 𝒩-positions, I𝒫 is the set of 𝒫-positions, and all the other positions have outcome ℒ. In particular, since both I𝒫 and I𝒩 are finite, this implies that the outcome sequence is eventually ℒ. Before showing this, we prove that under the conditions of the theorem, the intervals I𝒫i,j and I𝒩i,j satisfy the following properties.
Lemma 27. Fix the parameters a and b, and let A = ⌈a/(b − a)⌉ + 1. Assume that c and d are such that d((c, d), Tb,A) ≥ 2A(a + 2b). Then the intervals I𝒩i,j and I𝒫i,j satisfy the following properties:
(i) they are pairwise disjoint,
(ii) there is no interval I𝒫i′,j′ or I𝒩i′,j′ intersecting any of the b positions preceding I𝒩i,j,
(iii) I𝒫i,j + c = I𝒩i,j+1,
(iv) I𝒫i,j + d = I𝒩i+1,j,
(v) (I𝒩i,j + a) ∩ (I𝒩i,j + b) = I𝒫i,j.

Proof. The points (iii), (iv), and (v) are just consequences of the definitions of I𝒫i,j and I𝒩i,j. Consequently, we only need to prove the two other points.
We know that I𝒩i,j and I𝒫i,j are empty when i + j ≥ ⌈a/(b − a)⌉ + 1 = A. Consequently, we will further assume that the indices i, j, i′, and j′ are all upper bounded by A. We first show the following claim. The rest of the proof simply consists in applying this claim several times.

Claim 28. Assume that there are an integer B and indices i, j, i′ , j′ ≤ A such that one of
the following holds:
– |αi,j − αi′ ,j′ | ≤ B,
– |βi,j − βi′ ,j′ | ≤ B,
– |αi,j − βi′ ,j′ | ≤ B.

Then in all three cases, we have d((c, d), Tb,A ) ≤ 2A(B + b).

Proof. The first two cases are equivalent to the inequality |(i − i′ )(d + b) + (j − j′ )(c +
b)| ≤ B, and the result follows by applying Lemma 24. The third case is equivalent
to |(i − i′ )(d + b) + (j − j′ )(c + b) + b| ≤ B. Using the triangle inequality, this implies
|(i − i′ )(d + b) + (j − j′ )(c + b)| ≤ B + b, and the result follows from Lemma 24.

We will prove the points (i) and (ii) by proving their contrapositives. In other
words, assuming that one of these two conditions does not hold, we want to show
that d((c, d), Tb,A) ≤ 2A(a + 2b).
We first consider the point (i). First, assume that there are two intersecting intervals I𝒫i,j and I𝒫i′,j′. Then the left endpoint of one of these two intervals is contained in the other. Without loss of generality, we can assume that αi,j ∈ I𝒫i′,j′. This implies that

αi′,j′ ≤ αi,j ≤ αi′,j′ + a − (b − a)(i′ + j′),

hence

0 ≤ αi,j − αi′,j′ ≤ a − (b − a)(i′ + j′) ≤ a.

By Claim 28 this implies that d((c, d), Tb,A) ≤ 2A(a + b).


Similarly, if we assume that I𝒩i,j and I𝒩i′,j′ intersect, then this implies without loss of generality that βi,j ∈ I𝒩i′,j′, and, consequently, 0 ≤ βi,j − βi′,j′ ≤ a − (i′ + j′ − 1)(b − a) ≤ a. Again, using Claim 28, this implies d((c, d), Tb,A) ≤ 2A(a + b).
Finally, if I𝒩i′,j′ and I𝒫i,j intersect, then either 0 ≤ αi,j − βi′,j′ ≤ a if αi,j ∈ I𝒩i′,j′, or 0 ≤ βi′,j′ − αi,j ≤ a if βi′,j′ ∈ I𝒫i,j. In both cases, Claim 28 gives the desired result.
The proof for the point (ii) is essentially the same as above. If I𝒩i′,j′ intersects one of the b positions preceding I𝒩i,j, then we have the two inequalities

βi′,j′ + a − (i′ + j′ − 1)(b − a) ≥ βi,j − b,   βi′,j′ ≤ βi,j.

From these inequalities we can immediately deduce −a − b ≤ βi′,j′ − βi,j ≤ 0. The inequality d((c, d), Tb,A) ≤ 2A(a + 2b) follows immediately from Claim 28. Similarly, if the interval I𝒫i′,j′ intersects one of the b positions preceding I𝒩i,j, then we have the two inequalities

αi′,j′ + a − (i′ + j′)(b − a) ≥ βi,j − b,   αi′,j′ ≤ βi,j.

This implies −(a + b) ≤ αi′,j′ − βi,j ≤ 0, and again the result holds by Claim 28.

We now have all the tools needed to prove the theorem.


Proof of Theorem 26. Let a, b, c, and d be integers, let A = ⌈a/(b − a)⌉ + 1, and assume that d((c, d), Tb,A) ≥ 2A(a + 2b). We know that the properties of Lemma 27 hold.
We will show by induction on n that for any position n ≥ 0, if n ∈ I𝒫, then n is a 𝒫-position, if n ∈ I𝒩, then it is an 𝒩-position, and otherwise it is an ℒ-position. The inductive case is treated in the same way as the base case.
First, assume that n ∈ I𝒩i,j for some indices i and j such that i + j ≥ 1. Left has a winning move by playing a. Indeed, the interval I𝒩i,j has length at most a, and using condition (ii) of Lemma 27 and the induction hypothesis, n − a is an ℒ-position. If j > 0, then Right playing c leads to the position n − c ∈ I𝒫i,j−1 by condition (iii). This position is a 𝒫-position using the induction hypothesis. If i > 0, then similarly, Right can play d and put the game in the position n − d ∈ I𝒫i−1,j by condition (iv). This position is a 𝒫-position using the induction hypothesis.
Suppose now that n ∈ I𝒫i,j. If i and j are both zero, then neither player has any move, and n is a 𝒫-position. Otherwise, if Left plays either a or b, then this leads to a position n′ ∈ I𝒩i,j by condition (v). Using the induction hypothesis, n′ is an 𝒩-position, and Left has no winning move. Right's only possible winning move would be to a 𝒫-position n′. Using the induction hypothesis, this means that n′ ∈ I𝒫. However, this would mean by conditions (iii) and (iv) that n ∈ I𝒩, which contradicts the property (i) that I𝒩 and I𝒫 are disjoint. Consequently, Right has no winning move.
Finally, suppose that n ∉ I𝒫 ∪ I𝒩. We will show that Left has a winning move on n and Right does not. Since I𝒫0,0 = [0, a[, we can assume that n ≥ a, and Left can play a. Suppose that Left's move to n − a is not a winning move, and let us show that Left then has a winning move to n − b. Since Left's move to n − a is not a winning move, this means that n − a ∈ I𝒩i,j for some integers i, j with i + j ≥ 1. Consequently, we have n ≥ b, and playing b is a valid move for Left. By condition (v) we cannot have n − b ∈ I𝒩i,j, since otherwise we would have n ∈ I𝒫i,j. Moreover, we cannot have n − b ∈ I𝒩i′,j′ for some (i′, j′) ≠ (i, j) either, since this would contradict condition (ii). Consequently, n − b lies in neither I𝒩 nor I𝒫, and using the induction hypothesis, this is a winning move for Left. The only possible winning move for Right would be to play to a position n′ that is a 𝒫-position. Using the induction hypothesis, this means that n′ ∈ I𝒫. However, using the conditions (iii) and (iv), this would also imply n ∈ I𝒩, a contradiction.

Corollary 29. Under the conditions of the theorem, the game G is ultimately ℒ.

Proof. Since I𝒩i,j and I𝒫i,j are both empty if i + j ≥ A, the two sets I𝒫 and I𝒩 are finite,
and the result follows from the theorem.

Bibliography
[1] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays, Vol. 1,
A K Peters, Ltd., New York, 2001.
[2] A. S. Fraenkel and A. Kotzig, Partizan octal games: partizan subtraction games, Internat. J. Game
Theory 16(2) (1987), 145–154.
[3] U. Larsson, N. A. McKay, R. J. Nowakowski, and A. A. Siegel, Wythoff partizan subtraction, Internat. J. Game Theory 47(2) (2018), 613–652.
[4] G. A. Mesdal III, Partizan Splittles, Games of No Chance 3, MSRI Publications 56, 2003.
[5] T. Plambeck, Notes on Partizan Subtraction Games, working notes.
[6] J. Ramírez Alfonsín, Complexity of the Frobenius problem, Combinatorica 16 (1996), 143–147.
[7] A. N. Siegel, Combinatorial Game Theory, American Mathematical Society, Providence, RI, 2013.
[8] J. J. Sylvester, Mathematical questions, with their solutions, Educational Times 41 (1884), 21.
[9] G. S. Lueker, Two NP-Complete Problems in Nonnegative Integer Programming, Report No. 178,
Computer Science Laboratory, Princeton University (1975).
Matthieu Dufour, Silvia Heubach, and Anh Vo
Circular Nim games CN(7, 4)
Abstract: Circular Nim is a two-player impartial combinatorial game consisting of n
stacks of tokens placed in a circle. A move consists of choosing k consecutive stacks
and taking at least one token from one or more of the stacks. The last player able to
make a move wins. The question of interest is: Who can win from a given position
if both players play optimally? This question is answered by determining the set of
𝒫 -positions from which the next player is bound to lose, no matter what moves the
player makes. We will completely characterize the set of 𝒫 -positions for n = 7 and
k = 4, adding to the known results for other games in this family. The interesting
feature of the set of 𝒫 -positions of this game is that it splits into different subsets,
unlike the structures for the previously solved games in this family.

1 Introduction
The game of Nim has been played since ancient times, and the earliest European ref-
erences to Nim are from the beginning of the sixteenth century. Its current name was
coined by Charles L. Bouton of Harvard University, who also developed the complete
theory of the game in 1902 [3]. Nim plays a central role among impartial games, as any
such game is equivalent to a Nim stack [2]. Many variations and generalizations of Nim
have been analyzed. They include subtraction games, Wythoff’s game, Nim on graphs
and on simplicial complexes, Take-away games, Fibonacci Nim, etc. [1, 5, 6, 8, 9, 10,
11, 12, 13, 15, 16]. We will study a particular case of another variation, called Circular
Nim, which was introduced in [4]. This game imposes a geometric structure on Nim
heaps, which gives rise to interesting features in the set of 𝒫 -positions.

Definition 1. In Circular Nim, n stacks of tokens are arranged in a circle. A move con-
sists of choosing k consecutive stacks and then removing at least one token from at
least one of the k stacks. Players alternate moves, and the last player who is able to
make a legal move wins. We denote this game by CN(n, k). A position in CN(n, k) is rep-
resented by the vector p = (p1 , p2 , . . . , pn ) of nonnegative entries indicating the heights
of the stacks in order around the circle. An option of p is a position to which there
is a legal move from p. We denote an option of p by p′ = (p′1 , p′2 , . . . , p′n ) and use the
notation p → p′ to denote a legal move from p to p′ .

Matthieu Dufour, Department of Mathematics, UQAM, Montreal, Quebec, Canada, e-mail:


dufour.matthieu@uqam.ca
Silvia Heubach, Department of Mathematics, Cal State LA, Los Angeles, California, USA, e-mail:
sheubac@calstatela.edu
Anh Vo, Department of Mathematics, CSU Dominguez Hills, Carson, California, USA, e-mail:
anhvo1979@gmail.com

https://doi.org/10.1515/9783110755411-009
Note that a position in Circular Nim is determined only up to rotational symme-


try and reflection (reading the position forward or backward). The only terminal po-
sition of CN(n, k) is 0 := (0, 0, . . . , 0) for all n and k. Figure 9.1 shows an example of
the position p = (1, 7, 5, 6, 2, 3, 6) ∈ CN(7, 4) and one possible move to option p′ =
(0, 1, 5, 4, 2, 3, 6), where the four stacks enclosed by squares are the stacks selected for
play. Note that no tokens were taken from the stack of height 5. We will use play on a
stack to mean that the stack is one of the selected ones, whether actual tokens were
removed or not.

Figure 9.1: A move from p = (1, 7, 5, 6, 2, 3, 6) to p′ = (0, 1, 5, 4, 2, 3, 6).
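
For experimentation, the legality of a move is simple to encode. The following sketch (ours, not from the paper; the function name is illustrative) checks whether q is an option of p in CN(n, k) and validates the move of Figure 9.1:

    def is_option(p, q, k):
        # q is an option of p iff q <= p componentwise, at least one token is
        # removed, and all changed stacks fit in k consecutive stacks (cyclically).
        n = len(p)
        if len(q) != n or any(not 0 <= qi <= pi for pi, qi in zip(p, q)):
            return False
        changed = [i for i in range(n) if q[i] < p[i]]
        if not changed:
            return False
        return any(all((i - s) % n < k for i in changed) for s in range(n))

    assert is_option((1, 7, 5, 6, 2, 3, 6), (0, 1, 5, 4, 2, 3, 6), 4)      # Figure 9.1
    assert not is_option((1, 7, 5, 6, 2, 3, 6), (0, 7, 5, 4, 2, 3, 5), 4)  # too spread out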

Circular Nim is an example of an impartial combinatorial game, one for which both
players have the same moves. For impartial games, there are only two types of posi-
tions (= outcome classes). The outcome classes are described from the standpoint
of which player will win when playing from the given position. An 𝒩 -position in-
dicates that the Next player to play from the current position can win, whereas a
𝒫 -position indicates that the Previous player, the one who made the move to the
current position, is the one to win. Thus the current player is bound to lose from this
position, no matter what moves she or he makes. A winning strategy for a player in an
𝒩 -position is to move to one of the 𝒫 -positions. More background on combinatorial
games can be found in [1, 2]. An extensive bibliography on impartial games can be
found in [7].
Dufour and Heubach [4] proved general results on the set of 𝒫 -positions of
CN(n, 1), CN(n, n), and CN(n, n − 1) for all n. These general cases cover all games
for n ≤ 3. They also gave the results for all games with n ≤ 6, except for CN(6, 2),
and also solved the game CN(8, 6). In this paper, the main result is on the 𝒫 -positions
for CN(7, 4). One sign of the increase in complexity as n and k increase is that, un-
like in the results for the cases already proved, we can no longer describe the set of
𝒫 -positions as a single set, which makes the proofs more complicated.
To prove our main result, we use the following theorem.

Theorem 1 ([1, Theorem 2.13]). Suppose the positions of a finite impartial game can be
partitioned into mutually exclusive sets A and B with the following properties:
I. Every option of a position in A is in B, and


II. Every position in B has at least one option in A.

Then A is the set of 𝒫 -positions, and B is the set of 𝒩 -positions.

2 The game CN(7, 4)


In the discussion of CN(7, 4), we will use the generic position p = (a, b, c, d, e, f , g).
Since positions of CN(7, 4) are only determined up to rotation and reflection, we will
assume without loss of generality that in a generic position, a is a minimum. Figure 9.2
shows a generic position (a, b, c, d, e, f , g), where the minimum stack is rendered in red
(gray). Note that to avoid cumbersome notation, we will use the label, say a, to refer
to either the stack itself or to its number of tokens; which one it is will be clear from
the context.

Figure 9.2: A generic position in the game CN(7, 4) with a = min(p).

Here is our main result, with a visualization of the 𝒫 -positions of CN(7, 4) given in
Figure 9.3. In this figure, we highlight the sum conditions by encircling stacks whose
sums have to be equal in dark blue and color any stacks that equal the sums in the
same color. Pairs of stacks that also have the same sum, but for which this is true due
to some symmetry, are encircled in lighter blue.

Theorem 2. Let p = (a, b, c, d, e, f , g) with a = min(p). The 𝒫 -positions of CN(7, 4) are


given by S = S1 ∪ S2 ∪ S3 ∪ S4 , where:
– S1 = {p | a = b = 0, c = g > 0, d + e + f = c},
– S2 = {p | p = (a, a, a, a, a, a, a)},
– S3 = {p | a = b, c = g, d = f , a + c = d + e, 0 < a < e}, and
– S4 = {p | a = f , b + c = d + e = g + a, a < min{b, e}, a < max{c, d}}.
Figure 9.3: Visualization of the 𝒫-positions of CN(7, 4).

Note that all the subsets of S are disjoint. The condition a < max{c, d} of S4 prohibits
a pair of adjacent minima, which all other sets have. Also, S2 is disjoint from the other
sets because they all have a strict inequality condition. Finally, S1 ∩ S3 = ⌀ since a > 0
for S3 .
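
Since membership in S is invariant under rotation and reflection, a mechanical test must try all symmetric rewritings of a position with a minimum placed first. A sketch of such a test (ours; helper names are illustrative) follows:

    def symmetries(p):
        # all rotations of p and of its reversal
        n = len(p)
        for q in (p, p[::-1]):
            for r in range(n):
                yield q[r:] + q[:r]

    def in_S(p):
        # membership test for S = S1 ∪ S2 ∪ S3 ∪ S4 of Theorem 2
        for q in symmetries(p):
            a, b, c, d, e, f, g = q
            if a != min(q):
                continue  # the generic labeling puts a minimum first
            if a == b == 0 and c == g > 0 and d + e + f == c:
                return True                                           # S1
            if len(set(q)) == 1:
                return True                                           # S2
            if a == b and c == g and d == f and a + c == d + e and 0 < a < e:
                return True                                           # S3
            if (a == f and b + c == d + e == g + a
                    and a < min(b, e) and a < max(c, d)):
                return True                                           # S4
        return False

    assert in_S((0, 0, 5, 1, 2, 2, 5))      # S1: c = g = 5 and d + e + f = 5
    assert not in_S((0, 0, 5, 1, 2, 3, 5))  # trisum fails: 1 + 2 + 3 != 5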
The following definitions and remarks will aid us in the proofs of our results. Note
that we assume a to be the minimum, not necessarily unique. In the proofs, we will
typically denote the minimal and maximal values of a target position p’ by m and M,
respectively.

Definition 2. A tub configuration xaax is a set of four adjacent stacks that consists of
a pair of adjacent minima (of the position) surrounded by two stacks of equal height.
There are three other stacks in the position, which we denote by x1 x2 x3 unless we know
the actual stack heights. A peak configuration is of the form xXx, a set of three adja-
cent stacks with x < X. If x and X are the minimum and the maximum, respectively,
of the position, then we call this configuration a minmax peak. A position with a peak
contains four other stacks, which we denote by x1 x2 x3 x4 unless we know the actual
stack heights. Finally, a position has the common sum requirement if consecutive dis-
joint pairs of adjacent stacks have the same sum, with one exceptional overlap stack
contributing to two sums.

With these definitions, we can make the following remarks regarding the specific
features of each subset of S.

Remark 1.
(1) In S3 , a < e, and the sum conditions imply that c > max{a, d} and c ≥ e.
(2) In S4 , we have the following inequalities: a < min{b, e} implies that g > max{c, d}
due to the common sum requirement. Furthermore, g > a.
(3) Positions in S1 ∪ S3 contain a tub configuration. We have
– p ∈ S1 needs to satisfy the trisum condition x1 + x2 + x3 = x,
– p ∈ S3 needs to satisfy x1 = x3 and x2 + x3 = a + x.
(4) When trying to move from a position p with a = min(p) > 0 to p′ ∈ S1 ∪ S3 , we
can create a tub configuration with minimum a′ < a by selecting a pair of stacks,
making them height a′ and then reducing the larger of the two stacks adjacent
to the pair of a′ -stacks to the height of the smaller one (if needed). This height
gives the value of x in the tub configuration xa′ a′ x. Any remaining play has to
occur on x1 , the stack adjacent to the stack that was decreased to x. In labeling the
three nontub configuration stacks, we read p′ starting from the minima a′ in the
direction of the stack whose height was reduced to x. (If the two stacks adjacent to
the pair a′ a′ were already of equal height, then either one of the stacks adjacent to
the x stacks could play the role of x1 .) Note that we cannot play on the remaining
two stacks x2 and x3 , so we may not be able to meet the remaining conditions of
S1 or S3 , respectively.
(5) Positions in S4 always contain a minmax peak, whereas positions in S3 may con-
tain a peak. In either case the remaining four stacks have to satisfy that x1 + x2 =
x3 + x4 = x + X.
(6) Positions in S4 have either two or three minima. If c = a = m, then p = (m, M, m,
d, e, m, M), that is, two maxima alternate with three minima. Otherwise, the two
minima are separated by the maximum.
(7) The common sum requirement is a condition of S3 ∪ S4 and is trivially satisfied for
positions in S2 . It is relatively easy to see that we cannot make a move from a po-
sition p that satisfies the common sum requirement to p′ , which also satisfies the
common sum requirement if the location of the overlap stack remains the same.
In that case, at least one sum remains unchanged while at least one other sum is
decreased. Specifically, there is no move from S2 to S3 ∪ S4 .

We are now ready to embark on the proofs.

2.1 There is no move from p ∈ S to p′ ∈ S


Proposition 1. If p ∈ S, then p′ ∉ S.

Proof. To prove condition (I) of Theorem 1, we will use the equivalent statement that
there is no move from a 𝒫 -position to another 𝒫 -position. For each of the four subsets
of S, we consider moves to all the other sets. Note that the only terminal position is
0 ∈ S2 .

Moves from S1
We start with p = (0, c, d, e, f , c, 0) ∈ S1 with d + e + f = c. Note that we cannot move
to p′ ∈ S1 ∪ S2 because in either case, we would have to play on the five stacks cdefc
to simultaneously reduce the c stacks and the sum to a new value c′ < c in the case of
S1 and c′ = 0 in the case of S2 . A move to S3 is not possible since the minimum in S3 is
greater than zero. A move to p′ ∈ S4 is not possible because S4 does not have adjacent
minima by Remark 1(6). Thus no move is possible from S1 to S.
Moves from S2
Now assume that p = (a, a, a, a, a, a, a) ∈ S2 with a > 0 because p is the terminal
position for a = 0. To move to S1 , we have to create a tub configuration of the form
x00x, which requires play on three stacks (even though we remove tokens from only
two stacks). We can at most reduce one of the three remaining stacks x1 x2 x3 = aaa, so
the sum x1 + x2 + x3 ≥ 2a, whereas x = a, so there is no move from S2 to S1 . Clearly, we
cannot move from S2 to S2 . By Remark 1(7) there is no move from S2 to S3 ∪ S4 .

Moves from S3
Let p = (a, a, c, d, e, d, c) ∈ S3 with a + c = d + e. To move to S1 ∪ S3 , we have to create
a tub configuration of the form xa′ a′ x, with a′ = 0 for p′ ∈ S1 and a′ ≤ a for p′ ∈ S3 .
First, we consider play when the minima a′ of p′ are located at the a stacks. For a move
to S1 , we play on both a stacks making them zero and then either reduce both c stacks
or one of the d stacks, but not both. In either case, we have that x ≤ c and the trisum
d′ + e + d ≥ d + e = a + c > c, so the trisum condition is not satisfied. For a move to S3 ,
the overlap stack remains at the same location, and by Remark 1(7) there is no move
to S3 .
Now we look at the cases where we create a tub configuration xa′ a′ x elsewhere.
In each case, we use play on three stacks as described in Remark 1(4). By symmetry of
positions in S3 we have to consider the three possibilities indicated in Figure 9.4a. They
are x = a with x1 x2 x3 = edc, x = a with x1 x2 x3 = dca, or x = d with x1 x2 x3 = aac (since
c > d by Remark 1(1), we read counterclockwise). By Remark 1(3) we need to satisfy the
conditions x1 + x2 + x3 = x + 0 = x + a′ for p ∈ S1 and both x1 = x3 and x2 + x3 = x + a′ for
p ∈ S3 . We will show that even if we reduce x1 to zero, then we will not be able to satisfy
the respective sum conditions. For x = a, x2 + x3 ≥ min{d + c, c + a} > a + a′ = x + a′ ,
and for x = d, x2 + x3 = a + c = d + e > d + a′ = x + a′ . Thus p′ ∉ S1 ∪ S3 . It is also not
possible to move to p′ ∈ S2 , since min{c, e} > a by Remark 1(1), so we would need to
play on five stacks to reduce cdedc to aaaaa.

Figure 9.4: Visualization of moves from S3 to (a) S1 ∪ S3 and (b) S4 .


To show that we cannot move from S3 to S4 , we consider the possible locations of the
minmax peak of p′ . Due to the symmetry of positions in S3 , we need to consider the
four possible peak configurations shown in Figure 9.4b: a′ aa′ with sums d + e ≤ d + c,
a′ ca′ with sums e + d = c + a, a′ da′ with sums d + c > a + a, or a′ ea′ with sums c + a (in
both cases). Note that in the first three cases, we have a′ < a because the minimum of
the minmax peak in S4 has to be strictly less than the adjacent stacks, and in each of
these cases, the a stack is one of them. We can play on one more stack adjacent to the
a′ stacks, and we play on the stack that affects the larger sum. In the first two cases the
peak sum is smaller than the smaller of the two sums, and since we can adjust only
one sum, we cannot legally move to p′ ∈ S4 . For the third case, equality with the peak
sum requires that d′ + c = d + a′ , and hence d′ = d − c + a′ < a′ because c > d by
Remark 1(1). For the last case, the overlap stack is at the same location in p and p′ ,
so by Remark 1(7) we cannot adjust all four sums with play on only four stacks. This
shows that we cannot move to p′ ∈ S4 .

Moves from S4
Finally, we check whether we can move from p = (a, b, c, d, e, a, g) ∈ S4 to p′ ∈ S. The
approach is similar to that taken when p ∈ S3 . For a move to p′ ∈ S1 ∪ S3 , we once
more need to create a tub configuration xa′ a′ x, where a′ ≤ a, and a′ = 0 for moves
to S1 . Due to the semisymmetric nature of positions in S4 , we now need to consider
all seven placements of the new pair of minima. We start by putting them at stacks
a and b and get the following cases: x = c, x1 x2 x3 = aed (since we have to reduce g); x = a, x1 x2 x3 = eag; x = min{b, e}, x1 x2 x3 = aga (no matter which side we need to play on); x = a, x1 x2 x3 = bag (since we need to play on c); x = d, x1 x2 x3 = abc; x = a, x1 x2 x3 = dcb; and x = a, x1 x2 x3 = cde.
First, we look at the cases where x = a. Reducing x1 to zero, we have that x2 + x3 =
a + g = c + b = e + d > a + a ≥ a + a′ , so the sum conditions of S1 and S3 are not satisfied.
Likewise, for x = c, we have that x2 + x3 = e + d = c + b > c + a ≥ c + a′ , and for x = d,
we obtain x2 + x3 = b + c = d + e > d + a ≥ d + a′ . Finally, for x = min{b, e}, we have that
x2 + x3 = g + a = min{b, e} + max{d, c} > min{b, e} + a ≥ min{b, e} + a′ , so we cannot
move to p′ ∈ S1 ∪ S3 .
Next, we look at moves from S4 to S2 . Since a < min{b, e, g}, we have to reduce at
least those three stacks to a, which requires play on five stacks. Therefore we cannot
move from S4 to S2 .
Finally, we look at moves from S4 to S4 . If we keep the location of the minima and
hence the overlap stack, then by Remark 1(7) there is no move to p′ ∈ S4 . Thus we
need to consider whether we can create a minmax peak a′ Xa′ with a′ < a and the
remaining stacks x1 x2 x3 x4 , which satisfy x1 + x2 = x3 + x4 = a′ + X by Remark 1(5).
We can play on either x1 or x4 , but in either case, we can only modify one of the two
sums x1 + x2 and x3 + x4 . The common sum for p is s = g + a, whereas for p′ , it is
s′ = X + a′ < s. Furthermore, x2 and x3 cannot be adjusted. Let us look at the possible
cases, going clockwise and starting with new minima at the g and b stacks, for a total
of six cases: (1) X ≤ a and x1 x2 x3 x4 = cdea; (2) X ≤ b and x1 x2 x3 x4 = deag; (3) X ≤ c


and x1 x2 x3 x4 = eaga; (4) X ≤ d and x1 x2 x3 x4 = agab; (5) X ≤ e and x1 x2 x3 x4 = gabc;
and (6) X ≤ a and x1 x2 x3 x4 = abcd. In cases (1) and (3), x3 > X, whereas in cases (4)
and (6), x2 > X, either directly from the definition of positions in S4 or by Remark 1(2).
For the remaining two cases (2) and (5), we have that x1 + x2 = x3 + x4 = g + a = s > s′ ,
and we can adjust only one of the two sums. This shows that there is no move from S4
to S4 , and thus there is no move from S to S, completing the proof.

2.2 There always is a move from p ∈ Sc to p′ ∈ S


We now show the second part of Theorem 1.

Proposition 2. If p ∈ Sc , then there is a move to p′ ∈ S.

To show that we can make a legal move from any position p ∈ Sc to a position
p′ ∈ S, we partition the set Sc according to the number of zeros of p and, for positions
without a zero stack, according to the locations of the maximal stacks. We will only
need to distinguish between the cases of exactly one zero and of at least two zeros. Note
that in [14], Sc was partitioned according to the exact number of minima of p. The proof
presented here is shorter and uses some of the ideas from [14], such as Definition 3
and Lemma 1. We call out these structures and CN(3, 2)-equivalence (defined below)
because they give insight into stack configurations from which it is easy to move to
𝒫 -positions.

Definition 3. A position p is called deep-valley if five consecutive stacks p1 p2 p3 p4 p5


satisfy p2 + p3 + p4 ≤ min{p1 , p5 }. It is called shallow-valley if p1 ≤ p5 and p2 + p3 ≤ p1 <
p2 + p3 + p4 .

Lemma 1 (Valley lemma). If p = (p1 , p2 , p3 , p4 , p5 , p6 , p7 ) is deep-valley with stacks


p1 p2 p3 p4 p5 and s = p2 + p3 + p4 , then there is a move to p′ = (s, p2 , p3 , p4 , s, 0, 0) ∈ S1 .
On the other hand, if p is shallow-valley with p1 p2 p3 p4 p5 , then there is a move to
p′ = (p1 , p2 , p3 , p1 − (p2 + p3 ), p1 , 0, 0) ∈ S1 .

Proof. If p is deep-valley, then p′1 = p′5 = p2 + p3 + p4 ≤ min{p1 , p5 }, so it follows


that p → p′ ∈ S1 is a legal move. If p is shallow-valley, then p′1 = p′5 = p1 ≤ p5 ,
p1 − (p2 + p3) ≥ 0, and p4 ≥ p′4 = p1 − (p2 + p3). Also, p1 − (p2 + p3) + p2 + p3 = p1, so
p → p′ ∈ S1 is a legal move.
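For instance, in CN(7, 4) the position p = (9, 1, 2, 3, 8, 5, 6) is deep-valley with s = 1 + 2 + 3 = 6 ≤ min{9, 8}, and playing on the four consecutive stacks p5 p6 p7 p1 gives the legal move to p′ = (6, 1, 2, 3, 6, 0, 0) ∈ S1.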
The notion of CN(3, 2)-equivalence comes into play when p contains zero stacks. It
builds on the structure of the 𝒫 -positions of CN(3, 2), which are those with equal stack
heights (see [4], or convince yourself with a one-line proof). Note that the
following definition is not specific to the game CN(7, 4).

Definition 4. A position p of a CN(n, k) game is CN(3, 2)-equivalent if the stacks of p can


be partitioned into (mutually exclusive) subsets A1 , A2 , and A3 together with a set (or
sets) of consecutive zero stacks, where A1 , A2 , and A3 satisfy the following conditions:
(1) Any pair of the three sets A1 , A2 , and A3 , together with any zero stacks that are
between them, are contained in k consecutive stacks;
(2) Any move that involves at least one stack from each of the three sets A1 , A2 , and
A3 requires play on at least k + 1 consecutive stacks, thus is not allowed.

We define the set sums p̃ i = ∑pj ∈Ai pj and call a move a CN(3, 2) winning move if play on
the stacks in the sets Ai results in equal set sums in p′ . A CN(3, 2)-equivalent position
that has equal set sums is called a CN(3, 2)-equivalent 𝒫 -position.

CN(3, 2)-equivalent positions are perfectly suited for moves to S1 since the condi-
tions on the nonzero stacks in S1 require the equality of the trisum and two adjacent
stack heights (set sum of a single stack). However, we will also see that a CN(3, 2) win-
ning move can be used when there are additional inequality conditions on some of
the stacks as long as those conditions can be maintained. In other instances the sum
conditions may involve a stack outside the three sets, but the sum condition can be
achieved without play on that “outside” stack.
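For instance, the CN(7, 4) position p = (5, 0, 0, 3, 2, 2, 4) is CN(3, 2)-equivalent with A1 = {p1}, A2 = {p4}, and A3 = {p5, p6, p7} and set sums 5, 3, and 8; the CN(3, 2) winning move plays on the four consecutive stacks p5 p6 p7 p1, lowering the first and third set sums to the minimal set sum 3 and reaching the CN(3, 2)-equivalent 𝒫-position p′ = (3, 0, 0, 3, 0, 0, 3) ∈ S1.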
The proof of Proposition 2 will proceed as a sequence of lemmas, where we will
consider the individual cases according to the number of zeros and the location(s) of
the maximum values in the case where the position has no zero. We start by dealing
with positions that have at least two zero stacks.

Lemma 2 (Multiple zeros lemma). If p ∈ Sc and p has at least two stacks without to-
kens, then there is a move to p′ ∈ S1 ∪ S2 ∪ S4 .

Proof. Note that we will label the individual stacks as x, xi , y, and yj depending on
the symmetry of the position and the role the different stacks play. Typically, stacks
labeled x or xi are between zeros (short distance) or adjacent to zeros. Since the posi-
tions in S1 ∪ S3 ∪ S4 all have sum conditions that need to be satisfied, we will typically
use s to denote this target sum. We consider the case of two adjacent zeros, two ze-
ros separated by one stack and finally two zeros separated by two (or three) stacks,
where the distance is always assumed to be the shorter distance between any stacks.
Figure 9.5 shows the generic positions in each of the cases. Any position with at least
three zeros falls into either case (a) or case (b).
First, suppose there are two consecutive zeros in the position as shown in Fig-
ure 9.5a. Note that p = (x1 , 0, 0, x2 , y3 , y2 , y1 ) is CN(3, 2)-equivalent with sets A1 = {x1 },
A2 = {x2 }, and A3 = {y1 , y2 , y3 }. Thus we can make the CN(3, 2) winning move to p′ ∈
S1 ∪ S2 by adjusting the stacks in two of the Ai to make the set sums in p′ equal to the
minimal set sum in p. This can be achieved with play on four stacks or fewer. Note that
we move to S2 when either x1 or x2 (or both) equal zero, that is, we have at least three
consecutive zeros, and p′ is the terminal position in that case.

Figure 9.5: Generic positions with at least two zeros. (a) Two consecutive zeros. (b) Two zeros sepa-
rated by one stack. (c) Two zeros separated by two stacks.

Now we can assume that any zeros in p are isolated, that is, they are either separated
by one stack or by two stacks. Let us first consider the case of two zeros separated by
a single stack, that is, p = (0, x, 0, y1 , y2 , y3 , y4 ) with min{x, y1 , y4 } > 0 because of the
isolated zero condition (see Figure 9.5b). Our goal is to move to S4 . Due to the zeros
(which will also be the minima in p′ ), the sum conditions of S4 reduce to x′ = y1′ +
y2′ = y3′ + y4′ with min{y1′ , y4′ } > 0. Since p is CN(3, 2)-equivalent with sets A1 = {x},
A2 = {y1 , y2 }, and A3 = {y3 , y4 }, we can make the CN(3, 2) winning move to p′ . Note
that we can achieve the condition min{y1′ , y4′ } > 0 because the original stacks were
nonzero, and any set of two stacks that is being played on can be adjusted to achieve
the desired sum without making y1 or y4 equal to zero because x > 0 by the assumption
of the isolated zeros. However, if in the process we need to make y2′ = y3′ = 0, then the
resulting position is in S1 .
Next we turn to the case where the zeros are separated by two stacks, that is,
p = (0, x1 , x2 , 0, y2 , y, y1 ) with min{x1 , x2 , y1 , y2 } > 0 since we assume isolated zeros (see
Figure 9.5c). We also assume without loss of generality that y2 ≥ y1 . Now we need to
consider two subcases y1 ≥ x1 and y1 < x1 . Note that for each of the subcases, the sum
s will be defined on a case-by-case basis.
In the first case, we let s = min{x1 + x2 , y1 } and move to p′ = (0, x1 , x2′ , 0, s, 0, s) ∈ S4
with x1 + x2′ = s. While this looks as if there is play on five stacks, actually either x2 or y1
will remain the same. If s = y1 , then play is on the x2 , 0, y2 , and y stacks, and because
x1 ≤ y1 , we have x2′ = s − x1 = y1 − x1 ≥ 0. If s = x1 + x2 , then play is on the three y stacks.
Now we look at y1 < x1 , which is a little bit more involved. Here our goal is to move
to S1 , so we need to create a pair of zeros. Since y2 ≥ y1 , we choose x2′ = 0 and show
that we can make x1′ , y2′ , and the trisum 0 + y1′ + y′ equal in p′ . Let s = min{x1 , y1 + y, y2 }.
Unless s = y2 with y > y2 , we can move to p′ = (s, 0, 0, s, y′ , y1′ , 0) with y′ + y1′ = s by
playing on at most four stacks as follows: If s = x1 , then play is on stacks x2 , y2 , and y
with y′ = s − y1 = x1 − y1 > 0. If s = y1 + y, then play is on stacks x1 , x2 , and y2 . Finally,
if s = y2 ≥ y, then play is on stacks x2 , x1 , and y1 with y1′ = y2 − y ≥ 0.
This leaves the case of y1 < x1, y1 ≤ y2, s = y2 < min{x1, y1 + y} with y > y2 unresolved.
This set of inequalities can be simplified to y1 ≤ y2, y1 < x1, and y2 < min{x1, y}. Note
specifically that y > yi for i = 1, 2. We need to make further distinctions as to where the
maximal value occurs. In all cases, we will move to S1, but the location of the maximal value
determines where the pair of adjacent zeros is created. Let M = max(p) = max{x1 , x2 , y}
(all other stacks cannot be maximal due to the inequalities).
First, we consider the case where the maximal value occurs next to a zero, that is,
M = x1 or M = x2 . Let s = min{x1 + y1 , x2 + y2 , y} and assume that M = x1 . We claim that
there is a legal move to p′ ∈ S1 where p′ = (0, s, x2′ , 0, y2 , s, 0) with x2′ + y2 = s. Note that
M = x1 implies that s < x1 + y1 because s = x1 + y1 leads to a contradiction. Specifically,
because yi > 0 due to isolated zeros, we would have x1 < x1 + y1 = s ≤ y ≤ M = x1 . If
s = x2 + y2 , then play is on stacks x1 , 0, y1 , and y, and it is a legal move since x1 = M ≥
y ≥ s. If s = y, then play is on stacks y1 , 0, x1 , and x2 with x2′ = s − y2 = y − y2 > 0. Since
y > yi , the same proof, except with subscripts 1 and 2 changing places, applies when
M = x2 .
The final case is where M = y > max{x1 , x2 }. We first consider x1 > x2 and let
s = min{x1 , x2 + y2 }. Then the move is to p′ = (0, s, x2 , 0, y2′ , s, 0) ∈ S1 with x2 + y2′ = s. If
s = x1 , then play is on y1 , y, and y2 . The move is legal since y > x1 and y2′ = x1 − x2 > 0.
On the other hand, if s = x2 + y2 , then play is on stacks y, y1 , 0, and x1 , and y > x1 > s.
When x1 ≤ x2 , then the move is to p′ = (0, x1 , s, 0, 0, s, y1′ ) ∈ S1 with y1′ + x1 = s and
s = min{x2 , x1 +y1 }. The proof follows like in the case x1 > x2 . This completes the case of
two zeros that are two stacks apart and therefore the case of more than two zeros.

We next consider the case of a single isolated zero.

Lemma 3 (Unique zero lemma). If a position p ∈ Sc has a unique zero, then there is a
move to p′ ∈ S.

Figure 9.6: Generic position with a unique zero.

Proof. The generic position for this case is shown in Figure 9.6. Note that due to the
assumption of the unique zero, we have that all other stack heights are nonzero, so
xi > 0 and yi > 0 for i = 1, 2, 3. We may also assume without loss of generality that
x2 ≥ y2 . We will see that in almost all cases, we can move to S1 ; there is a single sub-
case where we will move to S4 . Table 9.1 gives a quick overview of the structure of the
subcases.

Table 9.1: Subcases for unique zero.

(a) x1 + y1 ≤ min{x2, y2} = y2                              p′ ∈ S1
(b) x1 + y1 > y2 and y2 ≥ y1                                p′ ∈ S1
(c) x1 + y1 > y2, y2 < y1, and x2 ≥ y1                      p′ ∈ S1
(d) x1 + y1 > y2, y2 < y1, and x2 < y1                      p′ ∈ S1 ∪ S4

(a) If s = x1 + y1 ≤ min{x2 , y2 }, then we can move to p′ = (0, x1 , s, 0, 0, s, y1 ) ∈ S1 .


(b) When x1 +y1 > y2 ≥ y1 , we have that y2 y1 0x1 x2 is a shallow valley, and by the valley
lemma there is a move to S1 .
(c) Since y1 > y2 implies that x1 + y1 > y2 , the conditions of this case reduce to y1 > y2 ,
x2 ≥ y1 , and x2 ≥ y2 . Let s = min{y1 , y2 + y3 + x3 }. The goal is to keep stacks y2 and
0 and then adjust the other stacks according to the value of s. If s = y1 , then we
move to p′ = (0, y1 , y2 , y3′ , x3′ , s, 0) ∈ S1 with y2 + y3′ + x3′ = s = y1 ; otherwise, we
move to p′ = (0, y1′ , y2 , y3 , x3 , s, 0) ∈ S1 with y1′ = s = y2 + y3 + x3 . These two moves
are legal because y3′ + x3′ = s − y2 = y1 − y2 > 0 and x2 ≥ y1 ≥ s.
(d) The conditions for this case, namely y2 < y1 , x2 < y1 , and x2 ≥ y2 , reduce to
y2 ≤ x2 < y1 . We distinguish between two main cases, namely whether x3 + y3 ≤
min{x1 , y1 } or not. We first consider the case x3 + y3 ≤ min{x1 , y1 }.
– If y2 < s = x3 + y3 ≤ min{x1 , y1 }, then we can move to p′ = (0, s − y2 , y2 , y3 , x3 ,
0, s) ∈ S4 . Since min{s − y2 , y2 , x3 } > 0, the conditions of S4 are satisfied.
– If s = x3 + y3 ≤ y2 ≤ x2 , then x2 x3 y3 y2 y1 is either a shallow valley or a deep
valley, depending on whether x3 + y3 + y2 > x2 or x3 + y3 + y2 ≤ x2 , and there
is a move to S1 .
Now we look at the second case, x3 + y3 > x1 or x3 + y3 > y1 . We show that with this
condition alone (disregarding the overall conditions of subcase d), we can show
that there is a move to S4 ∪S1 . We can therefore assume, without loss of generality,
that x1 ≥ y1 and consider two subcases x1 ≥ x3 + y3 > y1 and x3 + y3 > x1 .
– If x1 ≥ x3 + y3 > y1 and x3 + y3 > y1 + y2 , then we let s = y1 + y2 and can move to
p′ = (0, y1 , y2 , y3′ , x3′ , 0, s) ∈ S4 with y3′ +x3′ = s. We can adjust the sum y3′ +x3′ such
that x3′ > 0. Also, min{y1 , y2 } > 0, so the S4 conditions are satisfied. If, on the
other hand, x3 + y3 ≤ y1 + y2 , then we can move to p′ = (0, y1′ , y2 , y3 , x3 , 0, s) ∈ S4
with s = x3 + y3 and y1′ = s − y2 > 0, and the S4 conditions are satisfied.
– If x3 + y3 > max{x1 , y1 } and x1 ≥ y1 + y2 = s, then we can move to p′ =
(0, y1 , y2 , y3′ , x3′ , 0, s) ∈ S4 with s = y1 + y2 = y3′ + x3′ . Note that, once more,
min{x1 , x3 + y3 } ≥ y1 + y2 = s, so the move is legal. Finally, assume that
y1 + y2 > x1 = s. Now we have a move to p′ = (0, y1 , y2′ , y3′ , x3′ , 0, x1 ) ∈ S4 ∪ S1
with y1 + y2′ = x3′ + y3′ = s. Since y1 ≤ s, we can make the sum y1 + y2′ = s, and
we can also adjust the sum x3′ + y3′ while keeping x3′ > 0. If y3′ = y2′ = 0, then
p′ ∈ S1 ; otherwise, p′ ∈ S4 .

This completes the proof in the case of exactly one zero.



Finally, we deal with the case where the position p has no zero. In this case, we
divide the positions according to where the maximum is located in relation to other
maxima (if any). Note that when min(p) > 0, there is a close relation between positions
in S3 and S4 . A position p = (m, M, m, p4 , p5 , p6 , p7 ) with p4 + p5 = p6 + p7 = M + m and
min{p4 , p7 } > m is in S4 if max{p5 , p6 } > m and is in S3 if p5 = p6 = m. Therefore there is
a move to S3 ∪ S4 , and we need only check on the sum and minimum conditions. This
property will be used repeatedly in the maximum lemma.

Lemma 4 (Maximum lemma). Let p ∈ Sc with min(p) > 0. Then there is a move from p
to p′ ∈ S.

Proof. Let M = max(p). We will first look at the antipodal case, where we have
two maxima “opposite” (at distance two) of each other. The generic position is
p = (x1 , x2 , M, y3 , y2 , y1 , M) shown in Figure 9.7b.

Figure 9.7: Generic positions for antipodal maxima. (a) y3 = M and (b) y3 < M.

Table 9.2 shows the subcases we will consider for antipodal maxima. Without loss of
generality, we may assume that y3 ≤ y1 .

Table 9.2: Subcases for antipodal maxima.

(a)  y3 = M                                                 p′ ∈ S1 ∪ S3 ∪ S4
(b1) y3 < M and y2 + y3 ≤ M                                 p′ ∈ S1
(b2) y3 < M, y2 + y3 > M, and x1 ≥ x2                       p′ ∈ S3
(b3) y3 < M, y2 + y3 > M, and x1 < x2                       p′ ∈ S3 ∪ S4

(a) We start with the case M = y3 shown in Figure 9.7a. Note that since y3 ≤ y1 , we also
have y1 = M. In this case the generic position becomes p = (x1 , x2 , M, M, y, M, M).
We may also assume in this case that without loss of generality, x1 ≤ x2 . If x1 + x2 <
M, then Mx1 x2 MM forms a shallow valley, and there is a move to S1 . Now assume
that M ≤ x1 + x2 ≤ M + y. In this case, there is a move to p′ = (x1, x2, x1, M, x1 + x2 − M, x1 + x2 − M, M) ∈ S3. We can make the necessary adjustments since x1 ≤ M = max(p), and M ≥ y ≥ x1 + x2 − M ≥ 0 by assumption. Finally, when M + y < x1 + x2,
we can move to p′ = (x1′ , x2′ , y, M, y, M, y) ∈ S4 with x1′ + x2′ = M + y. Note that
M + y < x1 + x2 implies that M > y. We need to show that we can adjust the x1 and
x2 stacks such that x1′ > y and x2′ > y to satisfy the S4 conditions. This is possible
since x1 + x2 > M + y ≥ y + 1 + y = 2y + 1.

We now assume that M > y3 (see Figure 9.7b) and consider the subcases listed in
Table 9.2.
(b1) Since M ≥ y2 + y3 , position p is either shallow valley (if y1 + y2 + y3 > M) or deep
valley (if y1 + y2 + y3 ≤ M), so there is a move to p′ ∈ S1 .

Now let s = min{y2 + y3 , M + x1 , M + x2 }.


(b2) Since x1 ≥ x2 , s = y2 + y3 or s = M + x2 . In either case there is a move to
p′ = (s − M, s − M, M, y3 , y2′ , y3 , M) ∈ S3 with y2′ = s − y3 that only uses play on four
stacks. If s = y2 +y3 , then s ≤ min{M +x1 , M +x2 }, so s−M ≤ min{x1 , x2 }, and y3 ≤ y1
by assumption. Also, y2′ = s − y3 = y2 , so play is on the x2 , x1 , M, and y3 stacks.
We also have that s = y2 + y3 > M and y3 < M imply that 0 < s − M < s − y3 = y2 ,
so the inequality conditions of S3 are satisfied.
On the other hand, if s = M + x2 , then s − M = x2 , so play is on the x1 , M, y1 , and y2
stacks. By assumption, x1 ≥ x2 , y1 ≥ y3 . Because M + x2 < y2 + y3 , we have that
y2′ = M + x2 − y3 ≤ y2 . Also, M > y3 implies that y2′ > x2 > 0, so all conditions of
S3 are satisfied.
(b3) Since x1 < x2 , s = y2 + y3 or s = M + x1 . The first case is covered in (b2) since
the argument for s = y2 + y3 did not use x1 ≥ x2 . If s = M + x1 ≤ y2 + y3 , we
move to p′ = (x1 , x2 , s − x2 , y3′ , y2′ , x1 , M) ∈ S3 ∪ S4 with y2′ + y3′ = M + x1 = s,
playing on the one of the M stacks and the yi stacks. This move is legal because
s − x2 = M + x1 − x2 < M and y1 ≥ y3 ≥ M + x1 − y2 ≥ x1 . It remains to show that
min{x2 , y2′ } > x1 . By assumption of this case, x2 > x1 , and 0 < M − y3 ≤ y2 − x1
shows that we can satisfy the sum condition with y2′ > x1 .

This completes the case of antipodal maxima. We now consider the case where
M > max{x3 , y3 }, so the stacks that are opposite of M have strictly smaller height.
Our generic position is shown in Figure 9.8. Without loss of generality, we may as-
sume that x1 ≤ y1 . Once more, we move to either p′ ∈ S1 or p′ ∈ S3 ∪ S4 . Now let
s = min{M + x1 , x2 + x3 , y2 + y3 }.
– If s = M + x1 , then we can move to p′ = (M, x1 , y2 , y3′ , x3′ , x2 , x1 ) ∈ S3 ∪ S4 with
x2 + x3′ = y2 + y3′ = M + x1 . Play is on the yi stacks and x3 ; the move is legal because
x1 ≤ y1 by assumption, x3′ = M + x1 − x2 ≤ x3 , and x3′ > 0 since M = max(p)
and all stack heights are positive. Likewise, 0 < y3′ ≤ y3 . It remains to show that
min{x2 , y2 } > x1 . By assumption, M > max{x3 , y3 } which implies both 0 < M − x3 ≤
x2 − x1 and 0 < M − y3 ≤ y2 − x1 , so the move is legal.

Figure 9.8: Generic position when M > max{x3 , y3 }.

– If s = x2 + x3 and y3 ≥ s = x2 + x3 , then M > y3 implies that p is either shallow


valley (if y3 < x1 + x2 + x3 ) or deep valley (if y3 ≥ x1 + x2 + x3 ), and there is a move
to S1 . If y3 < s = x2 + x3 < M + x1 , then we move to p′ = (M ′ , m′ , y2′ , y3 , x3 , x2 , m′ ) ∈ S4
with M ′ = min{s, M}, m′ = s − M ′ = max{0, s − M} ≥ 0 and y2′ + y3 = s. Let us check
that this move is legal. If M ≥ s, then we can clearly create the M ′ and m′ stacks.
If M < s, then m′ = s − M > 0 and s − M < x1 ≤ y1 , so the adjustment on the M ′
and m′ stacks is legal. Next, we consider the y2 stack. Because y3 < s and y3 < M,
we have y2′ = s − y3 > max{0, s − M} = m′ ≥ 0. Finally, we have that x2 > 0 (by
assumption of no zero stacks) and also x2 > x2 + x3 − M = s − M since x3 < M, so
in either case, x2 > m′ , and the conditions of S4 are satisfied.
– If s = y2 + y3 , then the same arguments apply as in the case s = x2 + x3 , with the
roles of x and y interchanged except for the inequality that s < M + x1 .

This completes the proof of the maximum lemma.

These three lemmas together prove Proposition 2, because each position either
has multiple zeros, a unique zero, or no zero. Together with Proposition 1 and Theo-
rem 1, we have shown that the set S of Theorem 2 is the set of 𝒫 -positions of CN(7, 4).

3 Discussion
Our goal in the investigations of CN(n, k) has always been to find a general structure of
the 𝒫 -positions for families of games. So far we have found such results for CN(n, 1),
CN(n, n), and CN(n, n − 1) (see [4]). In addition, in all previous results for CN(n, k), we
have been able to find a single description of the 𝒫 -positions. The case of CN(7, 4) is
seemingly an anomaly in that four different sets make up the 𝒫 -positions. However,
a careful look at the 𝒫 -positions of CN(3, 2), CN(5, 3), and CN(7, 4), which are all ex-
amples of CN(2ℓ + 1, ℓ + 1), reveals a common structure. Recall that the 𝒫 -positions
of CN(3, 2) are given by {a, a, a} for a ≥ 0 and the 𝒫 -positions of CN(5, 3) are given by
{(x, 0, x, a, b) | x = a + b}. This leads to the following result.

Lemma 5. In the game CN(2ℓ + 1, ℓ + 1) the set of 𝒫 -positions contains the set S1 , where

S1 = {p = (x, 0, . . . , 0, x, a1, . . . , aℓ) | a1 + ⋯ + aℓ = x},

where the run of zeros has length ℓ − 1.

Proof. Observe that all positions in CN(2ℓ + 1, ℓ + 1) with ℓ − 1 consecutive zeros are
CN(3, 2)-equivalent with sets {p1 }, {pℓ+1 }, and {pℓ+2 , . . . , p2ℓ+1 }, and that S1 consists pre-
cisely of the CN(3, 2)-equivalent 𝒫 -positions. Therefore we cannot make a move from
S1 to S1 because this would amount to a move from a 𝒫 -position in CN(3, 2) to another
𝒫 -position in CN(3, 2). On the other hand, we can make a CN(3, 2) winning move into
S1 from any position in CN(2ℓ + 1, ℓ + 1) that has ℓ − 1 consecutive zeros. Therefore S1
must be a subset of the 𝒫 -positions of CN(2ℓ + 1, ℓ + 1).

Although Lemma 5 does not settle the question regarding the set of 𝒫 -positions
of the family of games CN(2ℓ + 1, ℓ + 1), the result shows that the set S1 for CN(7, 4),
which has the requirement of the zero minima, is not an anomaly, but a fixture among
the 𝒫 -positions of this family of games. Note that for CN(3, 2) and CN(5, 3), the set of
𝒫 -positions equals S1 . These two games are too small to show the more general struc-
ture of the 𝒫 -positions of this family. The question arises whether there are gener-
alizations of S2 , S3 , or S4 that constitute a part of the 𝒫 -positions of other games in
this family. The obvious candidate would be S2 with all equal stack heights. Interest-
ingly enough, this set is NOT a part of the 𝒫 -positions (except for the terminal po-
sition) of CN(9, 5). For example, the position (2, 2, 2, 2, 2, 2, 2, 2, 2) is an 𝒩 -position of
CN(9, 5).
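Claims of this kind are easy to check by brute force. The following minimal Python sketch (our illustration, not code from [14]) decides whether a CN(n, k) position is a 𝒫-position directly from the move rule, namely that a move lowers the heights of k consecutive stacks and removes at least one token in total; it is exhaustive and slow, but adequate for positions of this size:

from functools import lru_cache
from itertools import product

def is_P_position(pos, k):
    """Naive P-position test for circular Nim CN(n, k): a position is a
    P-position iff no move from it reaches another P-position."""
    n = len(pos)

    @lru_cache(maxsize=None)
    def P(p):
        for i in range(n):  # each window of k consecutive stacks (circularly)
            window = [p[(i + j) % n] for j in range(k)]
            # try every way to lower the heights within the window
            for new in product(*(range(h + 1) for h in window)):
                if sum(new) == sum(window):
                    continue  # a move must remove at least one token
                q = list(p)
                for j in range(k):
                    q[(i + j) % n] = new[j]
                if P(tuple(q)):
                    return False  # the mover can reach a P-position
        return True  # no move reaches a P-position (true for the terminal position)

    return P(tuple(pos))

print(is_P_position((3, 0, 3, 1, 2), 3))  # CN(5, 3), (x, 0, x, a, b) with x = a + b: True
print(is_P_position((2,) * 9, 5))         # the CN(9, 5) position above: False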

Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play, A. K. Peters Ltd., Wellesley, MA,
2007.
[2] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
second edition, A. K. Peters Ltd., Wellesley, MA, 2014.
[3] C. L. Bouton, Nim, a game with a complete mathematical theory, Ann. of Math. (2) 3, (1901/02),
35–39.
[4] M. Dufour and S. Heubach, Circular Nim games, Electron. J. Combin. 20(2), (2013), P22.
[5] R. Ehrenborg and E. Steingrímsson, Playing Nim on a simplicial complex, Electron. J. Combin.
3(1), (1996), R9, 33 pages.
[6] T. S. Ferguson, Some chip transfer games, Theoret. Comput. Sci. 191 (1998), 157–171.
[7] A. S. Fraenkel, Combinatorial games: selected bibliography with succinct gourmet introduction,
Electron. J. Combin. DS2 (2012), 109 pages.
[8] D. Gale, A curious Nim-type game, Amer. Math. Monthly 81 (1974), 876–879.
[9] R. K. Guy, Impartial games, in Games of No Chance, MSRI Publications, 29 (1996), 61–78.
[10] D. Horrocks, Winning positions in simplicial Nim, Electron. J. Combin. 17 (1) (2010), 13 pages.
[11] E. H. Moore, A generalization of a game called Nim, Ann. of Math. 11 (1910), 93–94.
[12] A. J. Schwenk, Take-away games, Fibonacci Quart. 8, (1970), 225–234.

[13] R. Sprague, Über zwei Abarten von Nim, Tohoku Math. J. 43 (1937), 351–354.
[14] A. Vo and S. Heubach, Circular Nim games CN(7,4). Cal State LA, (2018), 75 pages (thesis).
[15] M. J. Whinihan, Fibonacci Nim, Fibonacci Quart. 1(4) (1963), 9–13.
[16] W. A. Wythoff, A modification of the game of Nim, Nieuw Arch. Wiskd. 7 (1907), 199–202.
Aaron Dwyer, Rebecca Milley, and Michael Willette
Misère domineering on 2 × n boards
Abstract: Domineering is a well-studied tiling game, in which one player places verti-
cal dominoes, and a second places horizontal dominoes, alternating turns until some-
one cannot place on their turn. Previous research has found game outcomes and val-
ues for certain rectangular boards under normal play (last move wins); however, noth-
ing has been published about domineering under misère play (last move loses). We
find optimal-play outcomes for all 2 × n boards under misère play: these games are
Right-win for n ⩾ 12. We also present algebraic results including sums, inverses, and
comparisons in misère domineering.

1 Introduction
The game of domineering has two players alternately placing dominoes to tile a check-
erboard or any other grid. The player called Left can only place dominoes in a vertical
orientation, and the player called Right can only place horizontally. Domineering is
a combinatorial game because there is perfect information and no chance, and it is
partizan (as opposed to impartial) because the two players have different move op-
tions. In normal-play combinatorial games, the first player unable to move on her/his
turn loses; under misère play, the first player unable to move is the winner. This paper
considers domineering under misère play.
A game G is defined by the sets of Left options and Right options that the corre-
sponding player can reach with a single move. We use Gm×n to denote a game of dom-
ineering on an empty m × n board. So, for example, G2×2 has one Left option to G2×1
and one Right option to G1×2 .

Acknowledgement: A. Dwyer was supported in part by a Natural Sciences and Engineering Research
Council of Canada Undergraduate Student Research Award. R. Milley was supported in part by a Natural
Sciences and Engineering Research Council of Canada Discovery Grant. M. Willette was supported
in part by a Natural Sciences and Engineering Research Council of Canada Undergraduate Student
Research Award.

Aaron Dwyer, School of Mathematics and Statistics, Carleton University, Ottawa, Canada, e-mail:
aarondwyer@cmail.carleton.ca
Rebecca Milley, Computational Mathematics, Grenfell Campus, Memorial University, St. John’s,
Canada, e-mail: rmilley@grenfell.mun.ca
Michael Willette, Department of Applied Mathematics, University of Waterloo, Waterloo, Canada,
e-mail: mwillette@uwaterloo.ca

https://doi.org/10.1515/9783110755411-010

Given any game position G, the outcome o(G) is the winner under optimal play.
There are four possibilities:

o(G) = ℒ if Left wins G whether she goes first or second,
       ℛ if Right wins G whether he goes first or second,
       𝒩 if the next player to move in G wins,
       𝒫 if the previous player (i. e., not the next player) wins.

By o− (G) we denote the outcome of G under misère play and by o+ (G) the outcome
under normal play. For example, o− ( ) = 𝒩 and o+ ( ) = ℛ. The zero game, in
which there are no moves for either player (e. g., a 1 × 1 board in domineering), has
o− (0) = 𝒩 and o+ (0) = 𝒫 . The negative of a game G, denoted −G, is the game G with
the roles of Left and Right swapped; in domineering, this is equivalent to rotating G
by 90 degrees. The disjunctive sum of two games G and H is the game G + H in which,
on their turn, a player can choose to play in G or in H. In domineering, as players
place pieces, a single connected board often breaks into a disjunctive sum of disjoint
boards: for example, if Left plays in the third column of G2×6 , then the new position is
G2×2 + G2×3 .
Two games G and H are equal if they can be interchanged in any sum without
affecting the outcome, that is, if o(G +X) = o(H +X) for any sum of games X. Inequality
is defined by G ⩾ H if o(G + X) ⩾ o(H + X) for all X, where outcomes are ordered
according to preference by Left: ℒ > 𝒩 > ℛ and ℒ > 𝒫 > ℛ with incomparable 𝒩
and 𝒫 . Equality and inequality are dependent on the ending condition; games can
be equal or comparable in normal play but not in misère play, etc. In normal play,
G + (−G) = 0 for all games G.
Normal-play domineering has been the subject of numerous papers by math-
ematicians and computer scientists. Elwyn Berlekamp found the normal-play out-
comes and values for positions in 2 × n and 3 × n domineering in his 1988 paper [2].
Since that time, computer programs have been developed to find the normal-play
outcome of rectangular boards: up to 9 × 9 were solved by the computer program
developed in [3]; this was extended to 10 × 10 by [4] and finally to 11 × 11 by [10]. In
[5], theoretical and computational techniques were used to determine the outcomes
of all 2 × n boards under normal play: for n ⩾ 28, the boards are all Right-win.
What about misère play? The primary purpose of this paper is finding outcomes
of all 2 × n games of domineering under misère play. In general, misère play is much
less studied; although the standard definitions of addition, negation, equality, and
inequality can be applied, there are many problems with the algebra. For example, if
G ≠ 0, then G and −G never sum to zero in general misère play [7], and even in re-
stricted play (see Section 3), most games are not invertible. Another problem is that
knowing the misère outcome of two games gives no information about the misère out-
come of their sum [7]; in Section 3.1, we show that this property is true even when
restricted to domineering positions. For these and other reasons, it is much more diffi-
cult to analyze misère games using the usual game-theoretic techniques. Indeed, our
solution for 2 × n boards is purely combinatorial.
The remainder of the paper is structured as follows. In Section 2, we present the
solution for 2 × n domineering. In Section 3, we consider a number of algebraic prop-
erties of misère domineering, including outcomes of sums (Section 3.1), invertibility
(Section 3.2), and comparisons (Section 3.3) of certain 2 × n positions. Section 4 gives
a summary and further discussion.

2 Misère outcomes of 2 × n domineering


Let kG denote a disjunctive sum of k copies of the same position G. Consider the fol-
lowing two games:

2G2×2 and G2×4

Note that in normal-play, is its own additive inverse, and + = 0; in misère


play, even restricting only to domineering positions, + is not zero.1 We claim that
Right will always prefer the 2 × 4 board.2 The intuition is as follows: if Right has a good
strategy on 2G2×2 , then when playing on G2×4 , Right can just pretend that the board has
been sliced down the middle and follow his good strategy on 2G2×2 . So Right will do at
least as well on the 2×4 board as on two disjoint 2×2 boards. The intuition generalizes
to more than two copies of G2×2 , but note that it is not obvious or immediate: who is to
say that Left cannot force Right to play across the imaginary boundaries? We will show
that when desired, Right can control the game in this way. To see when that might be,
we first determine the misère outcome of multiple copies of G2×2 .

1 To see + ≠ 0 in misère play, we need a “distinguishing” game X with o( + + X) ≠ o(0 + X). Let X = 2G2×1. The misère outcome of + + + is 𝒫, whereas the misère outcome of + is ℛ.
2 “Right prefers G2×4 over G2×2 + G2×2” is equivalent to the inequality G2×2 + G2×2 ⩾ G2×4. We will show that this is true (modulo a restricted set of games) in Section 3.3 using a result from [6].

Lemma 1. The misère outcome of (2k)G2×2 is next-win, and the misère outcome of (2k + 1)G2×2 is previous-win.

Proof. We show winning strategies for Right, and the strategies for Left follow by symmetry. Right playing first on an even sum of 2 × 2 boards should use his first k moves to “claim” half of the boards, placing one piece in each of k different boards. Left cannot prevent this. Right should use the next k moves to play a second piece in each of those boards (i. e., Right plays in all the positions he just created). In total, Right places
2k pieces. During this time, there are exactly 2k moves available for Left among the
other k boards. Left as the second player will get the last move, and so Right wins.
The same strategy works for Right playing second on an odd sum of 2 × 2 boards:
this time, after Left and Right have each made 2k moves, there is an extra 2 × 2 board
remaining (or possibly two 2×1 boards), and it is Left’s turn next. Left is forced to move
to , and Right wins.
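Lemma 1 can also be confirmed mechanically for small k. A minimal sketch, assuming each summand is tracked only by its state (an empty 2 × 2 board, a leftover 2 × 1 column giving a Left move, or a leftover 1 × 2 row giving a Right move):

from functools import lru_cache

def sum_2x2_misere_outcome(k):
    """Misère outcome of a disjunctive sum of k empty 2 x 2 boards; the state
    (e, l, r) counts empty boards, leftover columns, and leftover rows."""
    @lru_cache(maxsize=None)
    def mover_wins(e, l, r, player):
        if player == 'L':
            # Left halves an empty board (leaving herself a 2 x 1 column)
            # or fills one of her leftover columns.
            opts = ([(e - 1, l + 1, r)] if e else []) + ([(e, l - 1, r)] if l else [])
        else:
            opts = ([(e - 1, l, r + 1)] if e else []) + ([(e, l, r - 1)] if r else [])
        if not opts:
            return True  # misère play: the player unable to move wins
        nxt = 'R' if player == 'L' else 'L'
        return any(not mover_wins(*o, nxt) for o in opts)

    lf, rf = mover_wins(k, 0, 0, 'L'), mover_wins(k, 0, 0, 'R')
    return {(True, True): '𝒩', (False, False): '𝒫',
            (True, False): 'ℒ', (False, True): 'ℛ'}[(lf, rf)]

print([sum_2x2_misere_outcome(k) for k in range(1, 7)])
# expected, per Lemma 1: ['𝒫', '𝒩', '𝒫', '𝒩', '𝒫', '𝒩']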

We now analyze 2 × n boards. A program was written in Python to determine the


outcome of m × n domineering boards under misère play (see the Appendix for other
computational results with m > 2). The following outcomes were determined by hand
and confirmed computationally:

n           0  1  2  3  4  5  6  7  8  9  10  11
o−(G2×n)    𝒩  ℛ  𝒫  ℒ  𝒩  ℛ  𝒫  𝒩  𝒩  ℛ  ℛ   𝒩

These initial cases do not indicate a pattern in the outcomes; fortunately, the next 12
(solved computationally) do:

n           12  13  14  15  16  17  18  19  20  21  22  23
o−(G2×n)    ℛ   ℛ   ℛ   ℛ   ℛ   ℛ   ℛ   ℛ   ℛ   ℛ   ℛ   ℛ

As in normal play, Right appears to have the advantage in a 2×n domineering board for
large enough n. Indeed, we will now show that for n ⩾ 12, all 2×n boards are Right-win.
(Interestingly, in normal play, other outcomes are possible until n ⩾ 28.) The strategy
for Right depends on the congruency of n modulo 4, and so we prove the result across
four separate theorems (Theorems 1–4).
To begin, we define some standard moves for Right in 2 × n domineering (see
Figure 10.1). Two Right pieces are adjacent if they are in the same row and occupy
consecutive columns, stacked if they are in the same two columns of different rows,
and staggered if they are in different rows and share exactly one column. We say “Right
makes a stacked move” to mean that Right places a piece that creates a pair of stacked
pieces. In some of the strategies described further, Right places two adjacent pieces to
guarantee that he can make a staggered move later in the game. We must show that
Left cannot prevent Right from placing one or two pairs of adjacent pieces, as needed,
as long as n is sufficiently large; this is done in Lemma 2.

Figure 10.1: Adjacent, stacked, and staggered Right moves.

Lemma 2. In a game of domineering on an empty 2 × n board:
(i) Right moving first can place his first two pieces adjacent if n ⩾ 6.
(ii) Right moving second can place his first two pieces adjacent if n ⩾ 12.
(iii) Right moving first can place his first four pieces as two disconnected pairs of adja-
cent pieces if n ⩾ 19.
(iv) Right moving second can place his first four pieces as two disconnected pairs of
adjacent pieces if n ⩾ 24.

Proof.
(i) If n ⩾ 6, then Right moving first can play in the middle of an empty 2 × 6 section
of the board. Left can only reply on one side or the other, and then Right’s second
piece can be placed on the opposite side, adjacent to his first piece. See Figure 10.2.

Figure 10.2: Right playing first in a 2 × 6 can place two pieces adjacent.

(ii) If n ⩾ 12 and Left plays first, then there is an empty 2 × 6 section on one side or the
other of Left’s first piece. By (i) Right can place two pieces adjacent in that section.
(iii) If n ⩾ 19, then Right’s first move should be in the middle of the first 2 × 6 section
of the board (i. e., across columns 3 and 4).
If Left’s first move is within that 2 × 6 section, Right should immediately place
the adjacent piece as in (i). Now there is still at least an empty 2 × 12 section of
the board starting after column 7, and so by (ii) Right playing second from here
can place another two adjacent pieces after column 7. Note that Right may have to avoid column 7 to ensure that the pairs of adjacent pieces are not connected.
If Left’s first move is not in the original 2 × 6 section, but rather somewhere in
columns 7 to 19, then as in (ii), Right can play in an empty 2 × 6 section within the
last 12 columns. Right can place his third and fourth pieces adjacent to his first
and second pieces (or in the other order, if threatened by Left).
(iv) If n ⩾ 24, then Right moving second can place a piece in the middle of the first
2 × 6 section of the board, assuming (without loss of generality) that Left placed
her first piece in the second half of the board.
If Left replies within that section, then Right will place his adjacent piece as in (i).
Left then makes a third move, leaving at most two Left pieces in the latter 24 − 7 = 17 columns of the board (Right will avoid column 7 to ensure that the pairs of adjacent pieces are not connected). At most two Left moves in a 2 × 17 section will necessarily leave
an empty 2 × 6 section, with Right to move next, so Right can place another two
adjacent pieces by (i).
If Left does not reply in the first 2 × 6 section, then after her second move, Left has
placed two pieces in the rightmost 2 × 18 section of the board; this still guaran-
tees an empty 2 × 6 section in the rightmost 2 × 17 section (avoiding column 7),
in which Right can place his second piece. As in (iii), Right can place his third
and fourth pieces adjacent to his first and second pieces (or second and first, if
necessary).

As noted above, Right’s strategy for G2×n will depend on the congruency of n mod-
ulo 4. In several cases the strategy will lead to a position of the form shown in Fig-
ure 10.3: a 2 × n position whose empty squares consist of an equal number of 2 × 1
and 1 × 4 sections, where the 1 × 4 sections are not adjacent to each other (but may be
connected by one or more of the 2 × 1 sections). Lemma 3 shows that Right can always
win these particular end-game positions.

Figure 10.3 (a)–(c): Positions in 2 × n domineering whose empty squares consist of nonadjacent 1 × 4 pieces and the same number of (possibly connected or connecting) 2 × 1 pieces.

Lemma 3. If the empty squares in a 2 × n domineering position consist of an equal num-


ber of 2 × 1 and 1 × 4 sections, with no two 1 × 4 sections adjacent, then Right can win
this position playing first.

Proof. Right should play in the middle of each 1 × 4 section; meanwhile, Left has no
choice but to take the 2 × 1 sections one at a time. Right may temporarily create a piece
like or or , but Left will play in the 2 × 1 section(s) of those before Right
runs out of 1×4 middle moves, because there are the same number of (2×1)s as (1×4)s.
When Left takes the last 2 × 1, there are no moves remaining, and Right wins.

We are now ready for the main results.

Theorem 1. If n ≡ 0 (mod 4) and n ⩾ 12, then a 2 × n domineering board is Right-win


under misère play; that is,

o− (G2×4k ) = ℛ for n = 4k ⩾ 12.

Proof. Assume that n = 4k ⩾ 12. If Right plays first on a 2 × n board, he should pretend
that the board is cut into 2 × 2 pieces and play the winning next-player strategy (as per
Lemma 1). Right can do this by first placing k pieces anywhere along the bottom row,
effectively claiming k 2 × 2 boards, and then playing directly above those k pieces. Left
cannot prevent Right from making these stacked moves. With each of the first k Right-
Left moves, three bottom-row spaces are taken, so that after Left’s kth move, exactly
k of the 4k columns remain empty. Left will be forced to take all k of these spaces as
Right plays his k stacked moves in the top row, and since Right went first, Right will
run out of moves first.
Right playing second is not as straightforward; Right should not play as if the
board were cut into (2k)G2×2 because that is a next-win position. Right must change the
parity using a staggered move. To set himself up for a staggered move at the end of the
game, Right will place two pieces adjacent, which we know he can do by Lemma 2(ii).
Here is Right’s strategy: place two adjacent pieces in the bottom row and then place
k − 2 more bottom pieces, for a total of k bottom pieces, as before. Since Left went first,
after Right’s kth move, there are 4k − 3k = k empty columns. Now Left has to begin
taking those empty columns. Right plays k − 2 stacked moves above all but his first two
pieces, and after that, there are two empty columns remaining, as well as an empty
1 × 4 section above Right’s first two pieces. It is Left’s turn: she takes one column, leav-
ing exactly one 1 × 4 and one 2 × 1, possibly connected. By Lemma 3 Right wins from
here with a staggered move.

We see for n ≡ 0 (mod 4) that Right playing first is “easy” and involves only
stacked moves for Right, whereas Right playing second requires Right to break par-
ity using a staggered move. We will see the same situation (but vice versa) for n ≡ 2
(mod 4). The hardest case is n ≡ 3 (mod 4), where staggered moves are required for
Right going first and second. It turns out that n ≡ 1 (mod 4) is the simplest case: Right
only ever needs to place stacked pieces, going first or second.

Theorem 2. If n ≡ 1 (mod 4), then a 2 × n domineering board is Right-win under misère


play; that is,

o− (G2×4k+1 ) = ℛ for n = 4k + 1.

Proof. The case n = 1 is clear. For larger n = 4k + 1, Right playing first or second should
follow the “cut up” strategy from the 4k case, that is, Right should place his first k
pieces in the bottom row and then place k stacked pieces. After each player has made
2k moves, Right has occupied 2k columns, and Left has occupied 2k columns, leaving
exactly one column empty. If it is Right’s turn next, then he has no move and wins; if
it is Left’s turn next, then she takes the last empty column, and then Right wins.

Theorem 3. If n ≡ 2 (mod 4) and n ⩾ 22, then a 2 × n domineering board is Right-win


under misère play; that is,

o− (G2×4k+2 ) = ℛ for n = 4k + 2 ⩾ 22.

Proof. For n = 4k + 2 ⩾ 22, Right playing second should place k pieces in the bottom
row followed by k stacked moves in the top row. After Right’s (2k)th move, each player
has taken 2k columns, so that only 2 columns remain, with Left to move. The columns
could be adjacent, forming a 2 × 2 square or not; either way, Left moving next loses.
Recall that the first player in an odd sum of 2 × 2 boards not only loses, but loses
with another move to spare; e. g., if Right playing first here places only stacked pieces,
then Left will run out of moves, and there will still be another 1 × 2 position remaining.
So to prevent Left from winning, it will not suffice to make a single staggered move as
in the 4k case; Right will have to arrange to make two staggered moves to force Left
into the last move. Right should use his first four moves to place two pairs of adjacent
pieces in the bottom row, not all adjacent, as per Lemma 2(iii). Right should then place
another k − 3 pieces in the bottom row for a total of k + 1 moves (across 2k + 2 columns)
and then place k − 3 stacked pieces above the latter bottom moves. In this time, Left
has taken 2k − 2 columns, so that two empty columns remain, along with two empty
1 × 4 sections above Right’s first four moves. By Lemma 3 Right wins playing first from
here with two staggered moves.

Theorem 4. If n ≡ 3 (mod 4) and n ⩾ 27, then a 2 × n domineering board is Right-win


under misère play; that is,

o− (G2×4k+3 ) = ℛ for n = 4k + 3 ⩾ 27.

Proof. Assume that n = 4k+3 ⩾ 27. Right playing first will aim to set himself up to make
a staggered move at the end of the game. Since n > 6, Lemma 2(i) tells us that Right can
place his first two pieces adjacent in the bottom row. Right should then place another
k − 1 pieces in the bottom row for a total of k + 1 moves; after these k + 1 Right–Left
moves, 3(k + 1) spaces have been taken in the bottom row, leaving (4k + 3) − (3k + 3) = k
empty columns. Next, Right makes k − 1 stacked moves above his latter k − 1 bottom
moves, whereas Left places in k − 1 columns, leaving exactly one empty column along
with one empty 1 × 4 section above Right’s first two pieces. By Lemma 3 Right wins
playing first from here with a staggered move.
Right playing second will set himself up to make two staggered moves at the end
of the game. By Lemma 2(iv) Right playing second with n ⩾ 27 can use his first four
moves to place two pairs of adjacent pieces, not all adjacent, in the bottom row. Right
should then make another k − 3 moves in the bottom row for a total of k + 1; after these
k + 1 Left–Right moves, there are (4k + 3) − 3(k + 1) = k empty columns. Now Right
makes k − 3 stacked moves, whereas Left takes k − 3 columns, leaving three empty
columns, along with two empty 1 × 4 sections above Right’s first four moves, with Left
to move. Left has to take one of the empty columns, which leaves two empty columns
and two empty 1×4s. By Lemma 3 Right wins from here with two consecutive staggered
moves.
With Theorems 1–4 and the base cases (n = 14, 15, 18, 19, 23) obtained computa-
tionally, we have the following main result.

Corollary 1. If n ⩾ 12, then a 2 × n domineering board is Right-win under misère play:

o− (G2×n ) = ℛ for n ⩾ 12.

3 The algebra of misère domineering


3.1 Sums
In normal play, outcomes of sums are somewhat dictated by the outcomes of the sum-
mands: for example, the sum of two Left-win games is always Left-win, the sum of a
next-win and a Left-win is either Left-win or next-win, and more. An interesting and
unfortunate fact about misère play, first proven by [7], is that the outcome of a sum of
two games is completely independent from the outcomes of each game: for any out-
comes 𝒪1 , 𝒪2 , 𝒪3 ∈ {𝒫 , 𝒩 , ℒ, ℛ}, there are games G and H such that

o− (G) = 𝒪1 , o− (H) = 𝒪2 , and o− (G + H) = 𝒪3 .

We have found that this property of misère games holds even if restricted to domineer-
ing; in fact, our examples (given in Table 10.1) are restricted to domineering positions
that fit within 2 × n and n × 2 boards.

3.2 Invertibility
For this and the next subsection, we need to define equivalence and inequality in re-
stricted game play. Two games are equivalent modulo 𝒰 for a set (universe) of games 𝒰
if they can be interchanged in any sum of games from 𝒰 without affecting the outcome:

G ≡𝒰 H if o(G + X) = o(H + X) for all X ∈ 𝒰 [9].

Note that this equivalence relation is weaker than the usual equality of games, for
which 𝒰 is taken to be any sum of game positions.

Table 10.1: Outcomes of sums of domineering positions, demonstrating the lawless addition of misère play. (The entries are small board diagrams, not reproduced here: the rows pair the summand outcomes 𝒫 + 𝒫, 𝒫 + 𝒩, 𝒫 + ℒ, 𝒫 + ℛ, 𝒩 + 𝒩, 𝒩 + ℒ, 𝒩 + ℛ, ℒ + ℒ, ℒ + ℛ, and ℛ + ℛ with columns for each possible sum outcome ℒ, 𝒩, 𝒫, and ℛ; each entry exhibits a pair of positions realizing that combination, e. g., 0 + 0 realizes 𝒩 + 𝒩 with sum outcome 𝒩.)

In general misère play (i. e., when 𝒰 is the set of all games), G+(−G) is not equal to zero
for any nonzero G [7], but games may be invertible modulo restricted universes. Let ℰ
be the universe of dead-ending games defined by the following property: if a player is
currently unable to move in a position, then they are never subsequently able to move
in that position, even after play by the opponent. For example, John Conway’s game
Hackenbush is dead-ending, whereas Richard Guy’s Toads and Frogs is not.3 Domi-
neering is dead-ending. From [8] we know that all ends (games in which at least one
player has no move) are invertible modulo ℰ ; therefore all 1 × n domineering boards
are invertible. However, most positions are not invertible, even modulo ℰ . For exam-
ple, the game ∗, which occurs as the board in domineering, is not invertible modulo ℰ.4
It is an open question to classify all invertible dead-ending positions. We have
found the positions given in Theorem 5 to be the only modulo-ℰ invertible domineer-

3 The position T F F has no available move for Left (Toad), but if Right (Frog) jumps, then
Left will have a move.
4 The game ∗ is not invertible in any universe containing the game “1” ( in domineering) [1].
Misère domineering on 2 × n boards | 167

ing boards with game trees of depth 2 (i. e., games of rank 2). This was determined
computationally using a recursive test from [6] to check G + (−G) for equivalence to
zero modulo ℰ , but we prove the invertibility here directly, with the definition of equiv-
alence.

Theorem 5. If G is a domineering position of rank 2 and G + (−G) ≡ℰ 0, then G is one of


the following boards or their negatives:

(1) (2) (3) (4) (5) (6)

Proof. We will show that each of these positions satisfies G + (−G) ≡ℰ 0, i. e., o(G + (−G) + X) = o(X) for all X ∈ ℰ. For all other rank-2 domineering boards G, the position
G + (−G) can be distinguished from 0 with X = or X = .
(1) The position has the same game tree as + and so is actually equivalent to
zero modulo ℰ .
(2) The position has the same game tree as + , which we know to be in-
vertible as it is an end.
(3) To show − ≡ℰ 0, we will show o( − + X) = o(X) for all dead-
ending games X. Suppose Left wins X playing first (playing second follows analo-
gously). Left should follow the same strategy on − + X; if Right plays in
the − component, then Left can reply with the inverse, as all options of
are invertible, bringing that component to zero. Left then resumes winning
on X. If Right does not play in − , then when Left runs out of moves in X,
say at a left end X ′ , she should play − +X ′ to +X ′ , leaving a position
with no Left moves and at least one Right move. By the definition of dead-ending
games Left has no further moves and so wins.
(4) Because all options of are invertible, the proof for − ≡ℰ 0 is almost
identical to the proof for (3). The only additional consideration is when Left runs
out of moves in X, say at X ′ . At that point, Left should play in the , bringing
the position to + + X ′ . From here Right has at least two moves, and Left has
only one, so Left will win.
(5) The proof for − ≡ℰ 0 is similar, except that Right could play in the
− +X to − +X. Left cannot just play to − +X, because −
is not equivalent to zero. Instead, Left should take the and leave − + X. If
Right plays in − , then Left can bring that position to zero and resume winning
on X; otherwise, Left runs out of moves in X and plays − to , leaving a left
end with at least one move for Right.
(6) The proof for is nearly identical to that for .

3.3 Comparisons
Inequality modulo 𝒰 is defined by

G >𝒰 H if o(G + X) > o(H + X) for all X ∈ 𝒰 .

Comparability is much less common and much harder to prove in misère play than in
normal play. Even among just domineering positions, we cannot say that Left would
always prefer the zero game to the position ; there are situations in which Left would
rather have an extra move than not, including when playing first on .
In Section 2, we claimed that + ⩾ℰ . We give the justification here. It
requires a series of inequalities that build upon each other and the hand-tying princi-
ple. The hand-tying principle says that if G and H have identical Right options, and
the Left options of H are a nonempty5 subset of the Left options of G, then G ⩾ H.
This is true because if Left has a good move in H, then that same move is available
in G. Similarly, if G and H have identical Left options and the Right options of G are a
nonempty subset of the Right options of H, then G ⩾ H, because Right prefers H.
The inequalities in Proposition 1 follow from Theorem 6, a weaker version of the
3-part comparison test for ℰ proven in [6]. Let GL (GR ) denote a single Left (Right) op-
tion of G.

Theorem 6. If G, H ∈ ℰ and
(1) for every GR , there is an H R such that H R ⩽ℰ GR , and
(2) for every H L , there is a GL such that GL ⩾ℰ H L ,

then G ⩾ℰ H.

Proof. Let X ∈ ℰ and assume that Right wins G + X. We must show that Right wins
H + X. Right should follow his strategy for G + X. If at some follower H + X ′ the good
Right move in G + X ′ would be GR + X ′ , then there is an H R ⩽ℰ GR , so Right will do just
as well playing to H R + X ′ . Otherwise, if Right does not move in the H component first,
then at some point, Left moves to H L + X ′ . However, for this H L , there is a GL ⩾ℰ H L ,
so H L + X ′ is better for Right than GL + X ′ , and Right would have a winning reply to
any such GL + X ′ . So Right wins from H L + X ′ .

The following inequalities can now be verified using the hand-tying principle,
Theorem 6, and earlier inequalities.

Proposition 1. The following comparisons are true modulo ℰ , the set of dead-ending
games.
1. + ⩾ℰ (i. e., 0 ⩾ℰ )

2. + ⩾ℰ
3. + ⩾ℰ
4. + ⩾ℰ

5 This is required in misère play; in normal play, inequality holds even if this set is empty.

Note that “splitting” a board does not always produce a better game for Left. For
example, Left does not prefer + over ; playing each in a sum with , Left likes
better.

4 Summary and discussion


In this paper, we have shown that all 2 × n domineering boards are Right-win under
misère play after n ⩾ 12. We have also found some interesting properties for misère
domineering more generally; e. g., even among only 2 × n and n × 2 positions, there
is no predictability about the outcome of G + H based on the outcomes of G and H.
We have identified the invertible rank-2 domineering boards and have proven some
inequalities among 2 × n positions, all modulo dead-ending games.
The strategy described for 2 × n boards has Right placing two adjacent pieces at
the start of the game to guarantee that he can make a staggered move at the end of
the game. We suspect that it is also possible (and more efficient) for Right to make
the staggered move(s) right away. This would reduce the lower bounds for n, which
come directly from Right needing room to place one or two pairs of adjacent pieces
(Lemma 2). This strategy seems to work for small boards, which we have tried by hand:
Right can either box in the 2 × 3 section containing his staggered moves or can force
Left to do so, which makes the board effectively one column shorter, or if Left prevents
this, then we can still find a way for Right to force the win. However, we could not find
a general argument to show that Right can always win with this strategy.
For larger rectangular m × n boards with n > m, our intuition is that the outcome
will always skew in favor of Right if n is sufficiently larger than m. Interestingly, the
computational results for m × n boards in the Appendix, Table 10.2, suggest that 3 × n
boards become Right-win even sooner than 2 × n boards; however, there may be other
outcomes for m = 3 and n ⩾ 12 that we have not observed.
The next steps for this work would be to determine outcomes for 3 × n and larger
boards, and to consider other interactions, such as comparisons and outcomes of
sums, among small (say rank 2 or 3) domineering positions. For the latter, continued
advancements in the larger universe of dead-ending games—e. g., determining which
games are invertible—may provide interesting insights into the algebra of misère
domineering.

Appendix
Using our (modest) domineering program, we have the following outcomes for m × n
domineering boards under misère play:

Table 10.2: Misère outcomes for m × n domineering.

m\n   2   3   4   5   6   7   8   9   10  11  ⩾12
2     𝒫   ℒ   𝒩   ℛ   𝒫   𝒩   𝒩   ℛ   ℛ   𝒩   ℛ
3     ℛ   𝒫   𝒫   ℒ   𝒩   ℛ   ℛ   ℛ   ℛ   ℛ   ?
4     𝒩   𝒫   𝒩   𝒫   𝒩   𝒩   𝒩   ℛ   ?   ?   ?
5     ℒ   ℛ   𝒫   𝒩   ℛ   𝒩   ?   ?   ?   ?   ?
6     𝒫   𝒩   𝒩   ℒ   𝒩   ?   ?   ?   ?   ?   ?
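A minimal sketch of such a solver (our reconstruction, not the authors’ program) represents a position by its set of empty cells and searches the game tree with memoization:

from functools import lru_cache

def misere_outcome(m, n):
    """Misère outcome class of an empty m x n domineering board."""
    full = frozenset((r, c) for r in range(m) for c in range(n))

    def moves(cells, player):
        # Left places vertical dominoes; Right places horizontal dominoes.
        dr, dc = (1, 0) if player == 'L' else (0, 1)
        for (r, c) in cells:
            if (r + dr, c + dc) in cells:
                yield cells - {(r, c), (r + dr, c + dc)}

    @lru_cache(maxsize=None)
    def mover_wins(cells, player):
        opts = list(moves(cells, player))
        if not opts:
            return True  # misère play: the player unable to move wins
        other = 'R' if player == 'L' else 'L'
        return any(not mover_wins(o, other) for o in opts)

    lf = mover_wins(full, 'L')  # does Left win moving first?
    rf = mover_wins(full, 'R')  # does Right win moving first?
    return {(True, True): '𝒩', (False, False): '𝒫',
            (True, False): 'ℒ', (False, True): 'ℛ'}[(lf, rf)]

print(misere_outcome(2, 6))  # 𝒫, matching the m = 2 row above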

Bibliography
[1] M. R. Allen, Peeking at partizan misère quotients, in R. J. Nowakowski (Ed.) Games of No
Chance 4, MSRI Publ. 63 (2015), 1–12.
[2] E. R. Berlekamp, Blockbusting and domineering, J. Combin. Theory Ser. A 49 (1988), 67–116.
[3] D. M. Breuker, J. W. H. M. Uiterwijk, and H. J. van den Herik, Solving 8 × 8 domineering, Theoret. Comput. Sci. (Math Games) 230 (2000), 195–206.
[4] N. Bullock, Domineering: Solving large combinatorial search spaces, ICGA J. 25 (2002), 67–84.
[5] M. Lachmann, C. Moore, and I. Rapaport, Who wins Domineering on rectangular boards? in R. J.
Nowakowski (Ed.) More Games of No Chance, MSRI Publ. 42 (2002), 307–315.
[6] U. Larsson, R. Milley, R. J. Nowakowski, G. Renault, and C. P. Santos, Recursive comparison
tests for dicot and dead-ending games under misère play, Integers 21B (2021), #A16.
[7] G. A. Mesdal and P. Ottaway, Simplification of partizan games in misère play, Integers 7 (2007),
#G6.
[8] R. Milley and G. Renault, Dead ends in misère play: the misère monoid of canonical numbers,
Discrete Math. 313 (2013), 2223–2231.
[9] T. E. Plambeck and A. N. Siegel, Misère quotients for impartial games, J. Combin. Theory Ser. A
115 (2008), 593–622.
[10] J. W. H. M. Uiterwijk, 11 × 11 domineering is solved: the first player wins, in A. Plaat, W. Kosters,
J. van den Herik (Eds.) Computers and Games, Springer, Leiden (2016), 129–136.
Zachary Gates and Robert Kelvey
Relator games on groups
Abstract: We define two impartial games, the Relator Achievement Game REL and the
Relator Avoidance Game RAV. Given a finite group G and generating set S, both games
begin with the empty word. Two players form a word in S by alternately appending an
element from S ∪ S−1 at each turn. The first player to form a word equivalent in G to a
previous word wins the game REL but loses the game RAV. Alternatively, we can think
of REL and RAV as “make a cycle” and “avoid a cycle” games on the Cayley graph Γ(G, S). We
determine winning strategies for several families of finite groups including dihedral,
dicyclic, and products of cyclic groups.

1 Introduction
In this paper, we define two 2-player combinatorial games, the Relator Achievement
Game REL and the Relator Avoidance Game RAV. Given a finite group G and generating
set S, two players take turns choosing s or s−1 , where s is a generator from S. The only
stipulation is that if the previous player chose s, the next player cannot choose s−1
and vice versa. The players' choices of group elements build a word in S. The goal
of REL is to be the first player to achieve a subword equivalent to the identity in G.
The game of RAV is the misère version of REL, meaning the first player to achieve a
subword equivalent to the identity loses the game. One can play these games on the
Cayley graph of G formed by using the generating set S. Since paths in a Cayley graph
correspond to words in S, the players’ choices of generators form a path in the Cayley
graph without backtracking. Hence, when viewed graphically, the goal of REL is to be
the first player to make a cycle, whereas for RAV, the goal is to avoid cycles.
One motivation for the development of the games REL and RAV originated from
recent results by Ernst and Sieben [8] and also by Benesh [3, 4, 5, 6] for the combi-
natorial games GEN and DNG, which were first defined by Anderson and Harary [2]. In
these games, two players alternate choosing distinct elements from a finite group G
until G is generated by the chosen elements. The first player to generate the group on
their turn wins the game GEN but loses DNG. Taking inspiration from this work, our goal
was to create a pair of games that incorporates the geometry of a group G through its
Cayley graph.
We have found in the current literature several combinatorial games on graphs, in-
cluding some on Cayley graphs. However, REL and RAV are distinct from these combina-
torial games. For example, Cops and Robbers (see [13, 14]), a popular pursuit-evasion
game, has been studied specifically on Cayley graphs (see [9]), and firefighting games
have been studied on Cayley graphs as well (see [10]). More recently, the Game of Cy-
cles was introduced by Su [16] and studied further by Alvarado et al. [1]. This game involves
planar graphs and two players taking turns marking previously unmarked edges with
a chosen direction. The Game of Cycles is the closest of these combinatorial games to
REL and RAV, since the goal of the game is to create a cycle. However, the parameters
for doing so are very different from those in REL.
This paper is structured as follows. In Section 2, we give a precise definition (Def-
inition 2.1) of the games REL and RAV along with some examples and initial results
concerning complete bipartite and complete Cayley graphs (see Theorem 2.5 and The-
orem 2.6). In Section 3, we explore the family of dihedral groups Dn , n ≥ 3, with its
canonical generating sets. We show winning strategies for the game REL in Theorem 3.2
and for the game RAV in Corollary 3.10. Corollary 3.10 follows from the more general
result in Theorem 3.9, which applies to any group with a generating set including an
element of order 2. In Section 4, we explore the family of dicyclic groups with two
common generating sets. The main results are Theorem 4.1, Theorem 4.2, Theorem 4.3,
and Theorem 4.4. In Section 5, we examine REL for products of cyclic groups ℤn × ℤm ,
where the results depend on n modulo m (Theorem 5.1). In Section 6, we discuss two
different n-player versions of REL and prove winning strategies for three-player REL
on the dihedral groups Dn in Theorem 6.1 and Theorem 6.3. Lastly, we conclude with
some open questions in Section 7.

2 Two-player relator games REL and RAV


Let G be a finite group, and let S be a generating set for G (with e ∉ S). We define two
two-player impartial combinatorial games, the Relator Achievement Game REL(G, S)
and the Relator Avoidance Game RAV(G, S), as follows.

Definition 2.1. On turn 1, Player 1 begins with the empty word w0 . Player 1 chooses
an element s1 ∈ S ∪ S−1 to create the word w1 = w0 s1 = s1 . The players then alternate
choosing elements of S ∪ S−1 . On turn n, with n > 1, the current player begins with a
word

wn−1 = s1 s2 . . . sn−1 .
They then select a generator sn ∈ S ∪ S−1 such that sn ≠ (sn−1 )−1 and form the word

wn = wn−1 sn .

If a player forms wn such that wn ≡G wk , that is, wn and wk represent the same ele-
ment of G, for some k, 0 ≤ k < n, then that player wins REL(G, S) and loses RAV(G, S),
respectively. If from any position there are no legal moves, then the next player loses.
Otherwise, play passes to the next player and continues as described above.

Remark 2.2. When the group and generating set are clear from context, we will use
the shorthand REL or RAV to refer to the Relator Achievement Game or the Relator
Avoidance Game, respectively, for a group G and generating set S.
We forbid the trivial relator ss−1 in our games since every group contains these
relators, and we seek nontrivial relators. We also assume in our definition that a gen-
erating set S does not contain the identity for similar reasons.
For the trivial group and the cyclic group of order 2, with their canonical gener-
ating sets, both games end due to the eventual absence of a legal move. These are, in
fact, the only groups where this occurs.

Recall that the Cayley graph Γ(G, S) for a group G and generating set S is a graph
with vertices the elements of G and a directed edge from vertex g to vertex h if h = gs
for some s ∈ S. Such an edge would be labeled by s.
If we consider a path of edges in the Cayley graph Γ(G, S), then this will correspond
to a word w = s1 s2 . . . sn−1 sn in G with letters in S ∪ S−1 . Therefore we can visually
play the games of REL and RAV on a Cayley graph: a player's choice of element sn ∈
S ∪ S−1 will correspond to traversing an undirected edge in the Cayley graph. A player
wins REL if they are the first to form a cycle (a relator) in the Cayley graph. Likewise,
a player loses RAV if they are the first to form a cycle in the Cayley graph. The rule
stating that a player may not choose the inverse of the last generator chosen translates
to disallowing backtracking in the Cayley graph.
We mention this visual Cayley graph correspondence as a useful way to analyze
the games REL and RAV. It can be helpful to play these games on a Cayley graph to
understand the winning strategies for different groups and generating sets. Note that,
because players choose elements from all of S ∪ S−1 , whenever we discuss Cayley graphs
we refer to the undirected Cayley graph.

Example 2.3. Consider REL(ℤn , {1}), where ℤn denotes the additive group of integers
modulo n with n > 2. The corresponding Cayley graph for (ℤn , {1}) is an n-sided poly-
gon. Hence the games REL(ℤn , {1}) and RAV(ℤn , {1}) are completely determined by the
parity of n. If n is even, then Player 2 will win REL, and Player 1 will win RAV. If n is odd,
then Player 1 will win REL, and Player 2 will win RAV.
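
For small groups, these outcomes can also be verified exhaustively. The following is a minimal brute-force sketch (our illustration in Python, not part of the original analysis; the function name and group encoding are ours), which treats a position as the triple (current element, visited elements, last letter played) exactly as in Definition 2.1:

from functools import lru_cache

def first_player_wins(identity, mult, inv, gens, achievement=True):
    # Brute-force outcome of REL (achievement=True) or RAV (achievement=False).
    # identity: the identity of G; mult(g, h) and inv(g): the group operations;
    # gens: the generating set S (inverses are adjoined automatically).
    # Returns True iff the player to move has a winning strategy.
    moves = list(gens) + [inv(s) for s in gens if inv(s) not in gens]

    @lru_cache(maxsize=None)
    def win(g, visited, last):
        legal = [s for s in moves if last is None or s != inv(last)]
        if not legal:
            return False                  # no legal move: the player to move loses
        for s in legal:
            h = mult(g, s)
            if h in visited:              # this move completes a relator
                if achievement:
                    return True           # ...which wins REL outright
                continue                  # ...and loses RAV, so never choose it
            if not win(h, visited | frozenset([h]), s):
                return True
        return False

    return win(identity, frozenset([identity]), None)

# Example 2.3: the outcome of REL(Z_n, {1}) alternates with the parity of n.
for n in range(3, 9):
    p1 = first_player_wins(0, lambda g, h: (g + h) % n, lambda g: (-g) % n, [1])
    print(n, "Player 1" if p1 else "Player 2")

The search is exponential in |G|, so this sketch is only practical for the small examples considered in this paper.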

Example 2.4. Consider the quaternion group Q8 with generating set S = {i, j}. We can
investigate the game REL(Q8 , S) by means of the Cayley graph Γ(Q8 , S). See Figure 11.1,
where the labels i and j are denoted by blue and red, respectively. Note that Γ(Q8 , S) is
a complete bipartite graph; that is, the vertices can be partitioned into two sets A and
B such that for any two vertices a ∈ A and b ∈ B, there is an edge joining a and b, and
for any two elements from the same set, there is no edge between them. In this case,
we have A = {±1, ±k} and B = {±i, ±j}. The set B is shaded in Figure 11.1.
To determine a winning strategy for REL(Q8 , S), note that Player 1 must choose
from the set B on their first turn. Player 2 cannot backtrack to 1 and so must move
to a vertex from A − {1}. Next, Player 1 moves to another vertex from B distinct from
their previous choice and thus cannot win on this turn. Finally, Player 2 wins on their
second turn by moving back to 1.

Figure 11.1: Cayley graph for Q8 with generating set {i, j}.

In general, if a group G contains an index 2 subgroup H and we let S = G − H, then Γ(G, S) is complete bipartite. In this case, we have the following theorem.

Theorem 2.5. If G is a finite group of order 2n with n ≥ 2, and S is a generating set such that Γ(G, S) is complete bipartite, then Player 2 wins REL(G, S), and Player 1 wins RAV(G, S).

Proof. The proof that Player 2 wins REL(G, S) follows the same argument as in Exam-
ple 2.4.
For RAV(G, S), the game is one of exhaustion. If |G| = 2n, then Player 1 has n possi-
ble vertices to move to on their first turn, whereas Player 2 has n − 1 options due to the
game starting at the identity. In general, Player 1 has n−k vertex options after their kth
turn, whereas Player 2 has n − k − 1 vertex options after their kth turn. These options
always exist because Γ is complete bipartite. Hence Player 2 will exhaust their options
before Player 1, and thus Player 1 wins RAV(G, S).

For any nontrivial finite group G, if we let S = G − {e}, then Γ(G, S) is a complete
graph. Such a case is also easy to analyze.
Theorem 2.6. If G is a finite group of order at least 3, and S is a generating set such that
Γ(G, S) is a complete graph, then Player 1 wins REL(G, S). Player 1 wins RAV(G, S) if |G| is
even, and Player 2 wins RAV(G, S) if |G| is odd.

Proof. If Γ(G, S) is complete and |G| ≥ 3, then Player 1 wins REL(G, S) on their sec-
ond turn by moving back to e since Player 2 may not backtrack to e on their first
turn. RAV(G, S) is a game of exhaustion as in the complete bipartite case. If |G| is even,
then Player 1 will complete a Hamiltonian path in Γ(G, S) on turn |G| − 1 and thus win
RAV(G, S) since Player 2 will have no available moves on the next turn. If |G| is odd,
then Player 2 wins by completing a Hamiltonian path for the same reason.
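
As a quick spot-check with the hypothetical first_player_wins sketch above: taking G = ℤ5 with S = {1, 2} gives S ∪ S−1 = {1, 2, 3, 4}, so Γ(G, S) is the complete graph K5, and |G| = 5 is odd.

mult, inv = lambda g, h: (g + h) % 5, lambda g: (-g) % 5
print(first_player_wins(0, mult, inv, [1, 2], achievement=True))   # Theorem 2.6: Player 1 wins REL
print(first_player_wins(0, mult, inv, [1, 2], achievement=False))  # |G| odd: Player 2 wins RAV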

Although generating sets that yield complete bipartite or complete Cayley graphs
allow for quick analysis of REL and RAV, they are rarely canonical generating sets for
groups. In this sense, Q8 is an outlier with its canonical generating set yielding a com-
plete bipartite Cayley graph.
We close this section with an answer to a natural question. Suppose two groups G
and H have isomorphic undirected Cayley graphs Γ(G, S) and Γ(H, T). Are the games
REL and RAV the same for both groups? The answer is yes. If a winning strategy dictates
a player move along the edge from g to gs in Γ(G, S), then the same player has a winning
strategy on the other group by moving along the corresponding edge in Γ(H, T). We
state this explicitly as the following theorem.

Theorem 2.7. Suppose Γ(G, S) and Γ(H, T) are isomorphic as undirected Cayley graphs.
A player has a winning strategy for REL(G, S) (respectively, RAV(G, S)) if and only if that
player has a winning strategy for REL(H, T) (respectively, RAV(H, T)).

We provide an example of this result in Example 3.1 at the beginning of the next
section.

3 Dihedral groups
For the dihedral groups Dn of order 2n with n ≥ 3, there are two common generating
sets: one is the Coxeter generating set composed of two reflections; the other is com-
posed of one reflection and one rotation. First, we examine the Coxeter generating set.

Example 3.1. Suppose S = {s, t} is a Coxeter generating set for the dihedral group Dn ,
that is,

Dn = ⟨s, t | s2 = t 2 = (st)n = e⟩.

In this case the games REL(Dn , S) and RAV(Dn , S) have the same outcomes as
REL(ℤ2n , {1}) and RAV(ℤ2n , {1}) since the undirected Cayley graphs Γ(ℤ2n , {1}) and
Γ(Dn , {s, t}) are isomorphic (see Theorem 2.7).
Hence we focus our attention for the rest of this paper on the following presenta-
tion for the dihedral groups:

Dn = ⟨r, s | r n = s2 = rsrs = e⟩.

3.1 REL(Dn , {r, s})


In this section, we investigate the Relator Achievement Game on Dn with generating
set {r, s}. Note that each element of Dn can be written uniquely as r i sj for some integers
i and j such that 0 ≤ i ≤ n − 1 and 0 ≤ j ≤ 1.

Theorem 3.2. If n is odd, then Player 1 has a winning strategy for REL(Dn , {r, s}). If n
is even, then Player 1 has a winning strategy if n ≡ 2 mod 6, whereas Player 2 has a
winning strategy otherwise.

Before we begin the proof, we provide some remarks and an auxiliary lemma.

Remark 3.3. The Cayley graph Γ(Dn , {r, s}) contains n “squares”, each corresponding
to the relation rsrs = e. Given the normal form r i sj , where 0 ≤ i ≤ n − 1 and 0 ≤
j ≤ 1, we number the squares in increasing order by i. Square 1 contains {e, s, r, rs},
Square n contains {r n−1 , r n−1 s, e, s}, and, in general, Square i contains {r i−1 , r i−1 s, r i , r i s}.
See Figure 11.2 and Figure 11.3.

Figure 11.2: Square i in Dn , with vertices r i−1 , r i , r i−1 s, and r i s.

Remark 3.4. If two edges of a square have already been traversed, then neither player
will move along a third edge of that square unless it is a winning play since traversing
a third edge sets up the opposing player to win on their next turn.

Because of the previous two remarks, once the first r or r −1 edge is chosen, the
players will move in one direction, clockwise or counterclockwise, along the Cayley
graph until a cycle is completed.

Remark 3.5. If a player chooses s, then the next two moves (if they exist) are both
determined by Remark 3.4 and therefore must either both be r or both be r −1 .
Figure 11.3: Γ(D5 , {r, s}) with squares 1 through 5.

We now introduce a definition that will be useful in the proof.

Definition 3.6. We say that a player enters Square i at vertex g ∈ {r i−1 , r i , r i−1 s, r i s} on
turn k if their choice of sk ∈ S ∪ S−1 yields wk ≡G g, and if wj ≡G h for some j < k, then
h ∉ {r i−1 , r i , r i−1 s, r i s}.

When the context is clear, we will state that a player has entered a square without
referring to the specific turn.
The following lemma will be used in the proof of Theorem 3.2. Note that the ap-
pearance of the modulo 6 condition in Theorem 3.2 is due to this lemma.

Lemma 3.7. For REL(Dn , {r, s}), suppose all moves have occurred on squares 1 through
k − 3 for some k such that 4 ≤ k ≤ n. If 5 ≤ k ≤ n and a player enters square k − 3 at the
vertex r k−4 s, then that player can guarantee entering square k at vertex r k−1 . If 4 ≤ k ≤ n
and a player enters square k − 3 at vertex r k−4 , then that player can guarantee entering
square k at vertex r k−1 s.

Proof. We assume that all prior moves have occurred on squares 1 through k − 3 and
that 5 ≤ k ≤ n. Let A, B ∈ {1, 2} with A ≠ B. Suppose that Player A enters square k − 3 at
the vertex r k−4 s, which must be done via a choice of r −1 . We then have two cases since
Player B may either play r −1 to move to r k−3 s or s to move to r k−4 .
If Player B chooses r −1 , then Player A will follow by choosing s to move to r k−3 .
The next two moves are then forced by Remark 3.5 if both players are to avoid making
the third edge on a square. Hence Player B will move to r k−2 , and Player A will move to
r k−1 , entering square k at this vertex.
If Player B chooses s, then the next two moves are forced, so Player A moves to
r k−3 , and Player B moves to r k−2 . Player A then has the option to choose r and enter
square k at r k−1 .
Now suppose 4 ≤ k ≤ n and that Player A enters square k − 3 at vertex r k−4 . For
k = 4, we note that the player enters at r 0 = e on the 0th move of the game. For k ≥ 5,
this must be done by playing r. The rest of the proof is similar to the previous case.
Remark 3.8. Allowing for repeated use of Lemma 3.7, we can effectively expand our
options for moves to the set {r, r −1 , s, t(α), u(α)} for α ≥ 1 instead of {r, r −1 , s}, where
r i sj t(α) = r i+1+3α sj+α and r i sj u(α) = r i+3α sj+α . Note that u(α) is a move available only to
Player 2 from the case k = 4 in Lemma 3.7 by entering Square 1 at r 0 = e on their 0th
move. We further note that by the proof of Lemma 3.7 the position prior to r i+1+3α sj+α
is r i+3α sj+α .

Proof of Theorem 3.2. First, we suppose without loss of generality that r is played be-
fore r −1 . A consequence of this and Remark 3.4 is that i is nondecreasing in the normal
form r i sj , where 0 ≤ i ≤ n − 1 and 0 ≤ j ≤ 1. Suppose n is odd. Then Player 1’s strategy
is to play r from vertices r k where 0 ≤ k ≤ n − 1 and r −1 from vertices r k s (see Figure 11.4
for an example). Since elements of S ∪ S−1 change the parity of exactly one of i and j
in the normal form by exactly one, we note that Player 1 moves only to elements r a sb
where a + b is odd, whereas Player 2 moves only to elements r c sd where c + d is even.
By Player 1’s strategy the power of r is strictly increasing after every two moves. Hence
we will eventually move to r n s0 = e or r n s1 = s. If the game moves to r n s0 = e, then
the game is over, and Player 1 wins since n + 0 is odd. If the game moves to r n s1 , then
Player 1 moves next since n + 1 is even. Then Player 1 plays s to win at e.

Figure 11.4: Example of Player 1 strategy for REL(D7 ) as described in Theorem 3.2. Player 1's moves are colored "red", and Player 2's moves are colored "blue". Player 1 only needs to choose r or r −1 generators to reach a winning position. Note that when Player 2 chooses an s, the next two moves are forced by Remark 3.4. Regardless of Player 2's next move, Player 1 will win the game.
Now suppose n is even. If Player 1 begins by playing r, then Player 2 wins by the same
strategy as in the case where n is odd. Hence we may assume that Player 1 begins by
playing s. In this case a player wins by moving to r n = e or r n s = s. Repeated use of
Lemma 3.7 allows us to expand our move set to {r, r −1 , s, t(α), u(α)} with t(α) and u(α) as
defined in Remark 3.8. We recall from Remark 3.8 that the position immediately prior
to r i+3α sj+α is r i+3α−1 sj+α . We now split into three cases:
1. Suppose that n = 6k + 2 for some k (see Figure 11.5). Player 1 first moves to s, and
Player 2 is forced, without loss of generality, to move to rs. Player 1 can now play
t(2k) to move from rs to r 1+1+3(2k) s1+2k = r 6k+2 s = r n s = s with the penultimate
position being r n−1 s. Thus Player 1 wins.
2. Suppose that n = 6k for some k. Player 2 effectively moves to r 0 s0 = e on the 0th
move, which is the start of Player 2 using u(2k) to move to r 3(2k) s2k = r 6k s0 = r n = e,
with the penultimate position being r n−1 . Thus Player 2 wins.
3. Suppose that n = 6k + 4 for some k. We assume that Player 1 moves to s. Then
Player 2 uses t(2k + 1) to move from s to r 1+3(2k+1) s1+(2k+1) = r 6k+4 s2k+2 = r n = e, with
the penultimate position being r n−1 . Thus Player 2 wins.

Figure 11.5: Portion of a general Cayley graph for Dn with moves as in Theorem 3.2, the even case with n ≡ 2 mod 6. By Lemma 3.7, if Player 1 enters square n − 2 at r n−3 , then they can guarantee reaching vertex s.
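
The case analysis of Theorem 3.2 can also be spot-checked with the brute-force sketch from Section 2 (again our own illustration, not the authors' method), encoding the normal form r i sj of Dn as the pair (i, j):

def dihedral(n):
    # Normal form r^i s^j as (i, j); the sign twist encodes s r s = r^{-1}.
    mult = lambda g, h: ((g[0] + (-1) ** g[1] * h[0]) % n, (g[1] + h[1]) % 2)
    inv = lambda g: ((-g[0]) % n, 0) if g[1] == 0 else g  # reflections are involutions
    return (0, 0), mult, inv, [(1, 0), (0, 1)]

for n in range(3, 9):
    e, mult, inv, gens = dihedral(n)
    print(n, "Player 1" if first_player_wins(e, mult, inv, gens) else "Player 2")
# Theorem 3.2 predicts Player 1 exactly for odd n and for n ≡ 2 mod 6.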
Figure 11.6: Example of Player 1 winning strategy for RAV(D10 ). Player 1 moves are colored "red", and Player 2 moves are colored "blue". Player 1's strategy is to always choose the generator s.

3.2 RAV for groups with an order two generator


In contrast with the achievement game for dihedral groups (Theorem 3.2), we have that
Player 1 has a winning strategy for RAV(Dn , {r, s}) for any n ≥ 3. This strategy involves
the formation of a Hamiltonian path in the Cayley graph (see Figure 11.6) by Player 1
always choosing the generator s. In fact, this strategy can be generalized to RAV(G, S)
for any group G with generating set S containing an element of order 2.

Theorem 3.9. Let G be a finite group with generating set S containing an element s of
order 2. Then Player 1 has a winning strategy for the game RAV(G, S).

Proof. The winning strategy of Player 1 is to always choose the order two generator s.
Since Player 2 can never choose s due to backtracking, they are forced to choose an-
other element of S.
We first show that a choice of s exists on each turn for Player 1. Indeed, suppose it
is Player 1’s turn and no such choice is available. Let v denote the vertex in the Cayley
graph Γ(G, S) representing this point in the game. Because Player 1 has no choice of s
available, this means that the edge labeled s from vertex v has been traversed previ-
ously. But then the vertex v must have been visited previously, meaning that Player 2’s
last move arriving at v was in fact a losing move for Player 2. Hence, if the choice of
generator s is not available for Player 1, then Player 2 already lost the game.
We now show that Player 1’s strategy is a winning strategy. Suppose for contradic-
tion that Player 1 choosing s to move from the vertex v to the vertex w is a losing move;
that is, this forms the first cycle in the Cayley graph. This means that w has previously
been visited. In the case that Player 2 reached w the previous time, Player 1’s strategy
implies that they would move to v via choosing s. Hence Player 2 actually formed a
cycle by moving to v for the second time, a contradiction.
In the case that Player 1 reached w the previous time, it was from the vertex v, so
Player 2 again must have formed a cycle by moving to v for the second time.

Corollary 3.10. Player 1 has a winning strategy for RAV(Dn , {r, s}) for any n ≥ 3.

Example 3.11. Let H be a finite group with generating set T, and let {e, s} = ⟨s⟩ ≅ ℤ2
be a cyclic group of order two with generator s. Suppose G = H ⋊ ℤ2 with canonical
generating set

S = (T × {e}) ∪ {(eH , s)}.

Then Theorem 3.9 implies that Player 1 has a winning strategy for RAV(G, S) by always
choosing the generator (eH , s). This applies in particular to the family of generalized
dihedral groups, which are defined as the groups G = H ⋊ ℤ2 where H is an Abelian
group and the action of ℤ2 on H is that of inversion.

Remark 3.12. Suppose that G is a group of even order. Then G must contain an element
of order two. It follows from Theorem 3.9 that there exists a generating set S for which
Player 1 has a winning strategy for the game of RAV(G, S).

4 RAV and REL for dicyclic groups


4.1 Dicyclic groups with generating set {a, x}
The dicyclic group Dicn , n ≥ 2, of order 4n is most commonly written via the following
presentation:

Dicn = ⟨a, x | a2n = x4 = x−1 axa = e⟩.

From the defining relations we can show that any g ∈ Dicn can be written in a nor-
mal form ai xj with 0 ≤ i < 2n and j ∈ {0, 1}, and with the following relations:

ak aℓ = ak+ℓ
ak aℓ x = ak+ℓ x
ak xaℓ = ak−ℓ x
ak xaℓ x = ak−ℓ+n .
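
These relations translate directly into code. The sketch below (ours, in the same hypothetical style as the earlier helpers) encodes ai xj as the pair (i, j) and can be combined with the brute-force solver from Section 2 to check the theorems of this section for small n:

def dicyclic(n):
    # Element a^i x^j as (i, j) with 0 <= i < 2n and j in {0, 1};
    # uses x a^k = a^{-k} x and x^2 = a^n from the relations above.
    def mult(g, h):
        (i, j), (k, l) = g, h
        if j == 0:
            return ((i + k) % (2 * n), l)
        return ((i - k + (n if l == 1 else 0)) % (2 * n), (j + l) % 2)
    def inv(g):
        i, j = g
        return ((-i) % (2 * n), 0) if j == 0 else ((i + n) % (2 * n), 1)
    return (0, 0), mult, inv, [(1, 0), (0, 1)]

# e.g., Theorem 4.1 below predicts a first-player win in RAV(Dic_3, {a, x}):
# e, mult, inv, gens = dicyclic(3)
# first_player_wins(e, mult, inv, gens, achievement=False)  # expect True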

The winning strategy for the game RAV(Dicn , {a, x}) is similar to that found in The-
orem 3.9. We note here that the generator x is not of order two; however, it plays a
similar role to that of the order-two generator from Theorem 3.9; that is, in the normal
form for elements of Dicn , the possible powers of x are either 0 or 1. Hence, although
x has order four, it acts like an element of order two in the normal form.

Theorem 4.1. Player 1 has a winning strategy for RAV(Dicn , {a, x}).

Proof. Note that for n = 2, we have Dic2 = Q8 . Hence this case is covered by Exam-
ple 2.4. For the remainder of the proof, suppose n ≥ 3. See Figure 11.7 for a Cayley
graph of Dic4 .

Figure 11.7: Cayley graph for Dic4 with generators a and x. The "blue" edges correspond to the generator a and the "red" edges to the generator x. On the inner octagon, if the vertices are labeled x, ax, a2 x, . . . , a7 x in a clockwise order, then a choice of the generator a will move a player counterclockwise.

Using the normal form described above, Player 1 has a winning strategy by moving
on each of their turns from ak x to ak by choosing x −1 or from ak to ak x by choosing x,
where k is an integer such that 0 ≤ k < 2n. Such a move is always available to Player 1,
which can be shown via an inductive argument. We will show explicitly the base case
and Player 1’s second turn.
For the base case, Player 1 starts with a choice of x, that is, they move from e = a0
to a0 x = x.
For Player 1’s second turn, based off Player 2’s choice of generator, we have three
possible game words:

xa ≡G a2n−1 x, x2 ≡G an , xa−1 ≡G ax.


Thus Player 1 can choose x−1 , x, or x−1 , respectively, resulting in

xax−1 ≡G a2n−1 , x3 ≡G an x, xa−1 x −1 ≡G a,

and none of these vertices has been previously visited.


Now assume that for the first m turns, Player 1’s strategy above has been success-
fully employed. We want to show that on Player 1’s next turn, i. e., on turn m + 2 of the
game, a move from ak x to ak via a choice of x−1 or a move from ak to ak x by choice of
x is possible.
Suppose we are in the first case and wm+1 ≡G ak x. Assume that Player 2 chose x
on their last turn. This means Player 2’s choice of x moved from ak to ak x. Since we
assume that Player 1’s strategy has been successfully employed for the first m turns,
Player 1 already made the move from ak x to ak via choosing x −1 on the mth turn. Hence
Player 2’s choice of x is itself an illegal move. Similarly, we can show that Player 1’s
strategy is possible if wm+1 ≡G ak .
Knowing that Player 1’s strategy is always possible, we now show that this strategy
is a winning strategy. Note that Player 1 will always move from a word equivalent to
ai xϵ to ai x1−ϵ , and vice versa, with ϵ ∈ {0, 1}. Now suppose that Player 1 loses the game
at a word wm ≡G ak xϵ for some k and m. This means that wℓ ≡G ak x ϵ for some 0 ≤
ℓ < m. By Player 1’s strategy, wm−1 ≡G ak x1−ϵ , but then it must also be true that wℓ−1 ≡G
ak x1−ϵ if Player 1 moved to wℓ , or wℓ+1 ≡G ak x1−ϵ if Player 1 previously moved from wℓ .
In either case, Player 2 must have already lost the game at wm−1 .
In Figure 11.8, we give a simplified partial Cayley graph for Dicn , labeled with re-
spect to the generating set {a, x}, that may provide a visual aid for the proof of Theo-
rem 4.2. In Figure 11.8 the inner and outer 2n-gons are given by concentric circles. In-
stead of drawing all edges, note that a choice of the generator a moves one clockwise
on the outer circle but counterclockwise on the inner circle. A choice of the generator
x or x−1 will move one from the inner to outer circle or vice versa.

Theorem 4.2. Player 1 has a winning strategy for REL(Dicn , {a, x}) for odd n.

Proof. Player 1 begins by choosing a and continues to do so until Player 2 chooses anything other than a. We will show that after Player 2 chooses a move from {x, x−1 },
they will either lose or must choose from {x, x−1 } again. This allows for an inductive
argument showing that Player 1 will necessarily win the game.
First, note that if Player 2 continues to only choose a, then Player 1 lands at ar with
r odd, whereas Player 2 lands at ar , with r even. By examining the sequence of moves
found in Table 11.1 we see that Player 1 will win if Player 2 only chooses a. Moreover, any
move by Player 2 from an−2 will yield a Player 1 win. Hence we may assume that Player 2
chooses x to move from aℓ0 to aℓ0 x or chooses x−1 to move from aℓ0 to aℓ0 x −1 = an+ℓ0 x
for some odd integer ℓ0 such that 1 ≤ ℓ0 ≤ n − 4.
Figure 11.8: A simplified way to visualize the Cayley graph of Dicn , which is composed of two 2n-gons, represented here by concentric circles. The generator a moves one clockwise on the outer circle but counterclockwise on the inner circle. The generator x moves one from the inner to outer circle or vice versa.

We now show that Player 2 must eventually choose again from {x, x −1 } to arrive at either
ak0 or an+k0 for some even integer k0 such that ℓ0 + 1 ≤ k0 ≤ n − 1. This is broken into
two cases depending on Player 2’s move from aℓ0 .

1. Suppose Player 2 chooses x to move from aℓ0 to aℓ0 x. Then Player 1 will choose a−1
until Player 2 chooses anything other than a−1 .
Should Player 2 only choose a−1 , then Player 2 arrives first at an−2 x since ℓ0 is odd.
Then Player 1 moves to an−1 x. By Table 11.2 we see that Player 2 is forced to choose
x−1 to arrive at an−1 . Then Player 1 can move first from an−1 to an , and by Table 11.2
again, Player 2 must move to an+1 to prevent losing. Player 1 will continue to choose
a until Player 2 chooses anything other than a. If play continues in this way, then
Player 2 will be the first to reach the vertex an+ℓ0 , since ℓ0 is odd. Then Player 1
would win by playing x−1 to move to aℓ0 x. Hence, for some even m with 2 ≤ m ≤
l0 − 1, Player 2 must choose xϵ with ϵ ∈ {±1} to move from an+m to an+m x ϵ . From
here Player 1 chooses xϵ to arrive at am in either case. Because 2 ≤ m ≤ ℓ0 − 1,
Player 1 will win.
Table 11.1: The sequence of moves if Player 2 moves to an−3 . Then Player 1 moves to an−2 via a. Regardless of how Player 2 proceeds from an−2 , Player 1 ends up back at e or an−2 to win.

Player 2 Moves            Player 1 Chooses       If P2 Chooses             Then P1 Wins By
an−2 →[a] an−1            →[x] an−1 x            →[a−1] an x               →[x] an x2 = a2n = e
                                                 →[a] an−2 x               →[x−1] an−2
                                                 →[x] a2n−1                →[a] a2n−1 a = a2n = e
an−2 →[x] an−2 x          →[a−1] an−1 x          →[a−1] an x               →[x] an x2 = a2n = e
                                                 →[x−1] an−1               →[a−1] an−2
                                                 →[x] a2n−1                →[a] a2n−1 a = a2n = e
an−2 →[x−1] a2n−2 x       →[a−1] a2n−1 x         →[a−1] a2n x = x          →[x−1] e
                                                 →[x] an−1                 →[a−1] an−2
                                                 →[x−1] a2n−1              →[a] a2n = e

Table 11.2: The first row details the sequence of possible moves should Player 2 move to an−2 x. The second row is a continuation of the third move sequence in row one. Note that Player 1 will either win or move to an+2 via choosing a.

Player 1 Moves            Player 2 Chooses                     P1 Moves First To
an−2 x →[a−1] an−1 x      →[a−1] (an−1 x)a−1 = an x            →[x] an x2 = a2n = e
                          →[x] an−1 x2 = a2n−1                 →[a] a2n = e
                          →[x−1] (an−1 x)x−1 = an−1            →[a] (an−1 )a = an
an−1 →[a] an              →[a] an+1                            →[a] an+2
                          →[x] an x                            →[x] an x2 = e
                          →[x−1] an x−1 = x                    →[x−1] xx−1 = e

Hence Player 2 loses unless for some even k0 such that ℓ0 + 1 ≤ k0 ≤ n − 3 ≤ n − 1, they choose either x to move from ak0 x to ak0 x2 = an+k0 or x−1 to move from ak0 x to (ak0 x)x−1 = ak0 .
2. Suppose Player 2 chooses x−1 to move from aℓ0 to an+ℓ0 x. Then Player 1 will choose
a−1 until Player 2 chooses anything other than a−1 . Should Player 2 only choose
a−1 , then they will be the first to arrive at a2n x = x because n + ℓ0 is even. From
here Player 1 chooses x−1 to win at xx−1 = e.
Hence Player 2 loses unless for some even k0 such that ℓ0 + 1 ≤ k0 ≤ n − 1,
they choose from {x, x−1 } to move from an+k0 x to an+k0 x 2 = a2n+k0 = ak0 or to
(an+k0 x)x−1 = an+k0 .
We have thus shown above our base case: Player 2 must move to ak0 or an+k0 for some
even k0 such that ℓ0 + 1 ≤ k0 ≤ n − 1, where 1 ≤ ℓ0 ≤ n − 4. Now let us assume that
Player 2 has moved to aki or an+ki for some even ki such that ℓi + 1 ≤ ki ≤ n − 1, where
ℓi is an odd integer satisfying 1 ≤ ℓi ≤ n − 4.
– Suppose Player 2 moves to aki ; then Player 1 chooses a to move to aki +1 . Note that if
ki = n − 3, then Player 1 has a winning sequence of moves by Table 11.1. If ki = n − 1,
then by Table 11.2 Player 2 is forced to move to an+1 , and Player 1 will win by the
same argument in (1). Otherwise, we obtain an odd ℓi+1 such that ki +1 ≤ ℓi+1 ≤ n−4.
By the same reasoning as above, we can obtain an even ki+1 such that ℓi+1 + 1 ≤
ki+1 ≤ n − 1.
– If Player 2 moves instead to an+ki , then Player 1 will continue to play a until Player 2
chooses from {x, x−1 }. Because ki is even and n is odd, Player 1 will be the first to
move to an+n = e if ki = n − 1 or if Player 2 only chooses a. Otherwise, Player 2
chooses from {x, x−1 } to move from an+ℓi+1 to an+ℓi+1 x or an+ℓi+1 x −1 = aℓi+1 x for some
odd ℓi+1 such that ki + 1 ≤ ℓi+1 ≤ n − 4 ≤ n − 2. Similarly to the previous cases (1)
and (2), we will show that Player 2 must move to aki+1 or an+ki+1 for some even ki+1
such that ℓi+1 + 1 ≤ ki+1 ≤ n − 1.
– Suppose Player 2 chooses x to move to an+ℓi+1 x for some odd ℓi+1 such that
ki + 1 ≤ ℓi+1 ≤ n − 2. Then Player 1 will continue to play a−1 until Player 2
chooses from {x, x−1 } or Player 2 reaches an+n x = x, in which case Player 1
wins at e by playing x−1 . Hence Player 2 moves to either (an+ki+1 x)x = aki+1 or
(an+ki+1 x)x−1 = an+ki+1 for some even ki+1 such that ℓi+1 + 1 ≤ ki+1 ≤ n − 1.
– Suppose Player 2 chooses x−1 to move to aℓi+1 x for some odd ℓi+1 such that
ki + 1 ≤ ℓi+1 ≤ n − 2. Then Player 1 continues to play a−1 until Player 2 chooses
from {x, x−1 } or until Player 1 reaches an−1 x. Player 1 reaches an−1 x first because
both ℓi+1 + 1 and n − 1 are even. Note that by Table 11.2, if Player 1 moves to
an−1 x, then either Player 1 wins, or Player 2 moves to an+1 . Then Player 1 can
eventually move to either an+ℓ0 x or an+ℓ0 x −1 = aℓ0 , or Player 1 can move to
an+m xϵ xϵ = am after Player 2 moves to an+m x ϵ for some ϵ ∈ {±1} and 2 ≤ m ≤
ℓ0 − 1. In either case, one of an+ℓ0 x, aℓ0 , or am with 2 ≤ m ≤ ℓ0 − 1 has been
visited previously, and hence Player 1 wins. Hence, as in the previous case,
we can assume that Player 2 moves from aki+1 x to either (aki+1 x)x = an+ki+1 or
(aki+1 x)x−1 = aki+1 for some even ki+1 such that ℓi+1 + 1 ≤ ki+1 ≤ n − 1.

We have shown that either Player 1 wins or we can generate a strictly increasing se-
quence of positive integers (ki ) satisfying ki ≤ n − 1 for all i, since ki < ℓi+1 < ki+1 . As
the set of positive even integers less than or equal to n − 1 is finite, there must exist an
integer j such that kj = n − 1. Thus Player 2 eventually arrives at either a2n−1 or an−1 .
The choice of a to move from a2n−1 to a2n = e is a win for Player 1. By Table 11.2 Player 1
can choose a to move from an−1 to an , which, as argued previously, leads to a Player 1
win. Therefore we conclude that Player 1 wins REL(Dicn , {a, x}) for odd n ≥ 3.
Theorem 4.3. Player 2 has a winning strategy for REL(Dicn , {a, x}) when n is even.

Proof. Player 2 has a winning strategy via mirroring Player 1; that is, if Player 1 chooses
a generator s on their turn, Player 2 will follow with s on their turn. Since x 2 = (x −1 )2 =
an and n is even, we note that this strategy implies that Player 1 only lands at aℓ , where
ℓ is odd, or at ak x, where k is even. Meanwhile, Player 2 will only land at ak with even k.
To show that Player 2 has a winning strategy, we assume for contradiction that
Player 1 has a winning strategy. There are two cases: Player 1 can win either at aℓ for ℓ
odd or at ak x for k even as stated above.
First, suppose that Player 1 wins at aℓ with odd ℓ. The first time that Player 1 ar-
rived at aℓ must have been from either aℓ−1 or aℓ+1 . By Player 2’s mirroring strategy,
Player 2 would have then moved to the other. Thus, upon reaching aℓ for the second
time, Player 1 must have moved from aℓ−1 or aℓ+1 , both of which would have been vis-
ited for a second time. Hence Player 2 won the previous turn, a contradiction.
Now suppose that Player 1 wins at ak x with even k. The first time that Player 1
arrived at ak x must have been from ak or an+k . Player 2 then would have moved to
the other. Thus, upon reaching ak x for the second time, Player 1 would have again
moved from ak or an+k , both of which would have been visited for a second time. Hence
Player 2 won the previous turn, a contradiction.

4.2 Dicyclic groups with triangle presentation


There is another common presentation for the dicyclic groups, namely, as an instance
of a triangle group or binary von Dyck group:

Dicn = ⟨a, b, c | an = b2 = c2 = abc⟩.

Note that the triangle presentation for Dicn is isomorphic to that given in Sec-
tion 4.1 via the mapping

a ↦ a, x ↦ b−1 , ax−1 ↦ c.

Hence we can describe group elements via the normal form ai bj with 0 ≤ i < 2n and j ∈
{0, 1}. For the proofs that follow, we will make use of this normal form. See Figure 11.9
for an example of the Cayley graph of Dic4 with generating set {a, b, c}.
For the game of RAV(Dicn , {a, b, c}), we have the same result as Theorem 4.1.

Theorem 4.4. Player 1 has a winning strategy for RAV(Dicn , {a, b, c}).

Proof. Player 1 has a winning strategy by choosing b on their first turn and then mov-
ing from ak b to ak by choosing b−1 or moving from ak to ak b by choosing b on their
subsequent turns. The only addition to the previous argument of Theorem 4.1 is ac-
counting for the generator c. Because c = ab, we have bc = bab ≡Dicn an−1 and
Figure 11.9: Cayley graph for Dic4 with generators a, b, and c. The "blue" edges correspond to the generator a, the "red" to the generator b, and the "green" edges to c. We have a normal form ai bj with 0 ≤ i < 2n and j ∈ {0, 1}, and the inner vertices are labeled b, ab, a2 b, . . . , a7 b in a clockwise manner.

bc−1 = b(b−1 a−1 ) ≡Dicn a2n−1 . Note that in both cases, Player 1 can choose b on their
next turn to move to an−1 b or a2n−1 b, respectively, thus extending the argument given
in Theorem 4.1 for Player 1’s second turn. The rest of the proof follows exactly as in
Theorem 4.1.

Despite the addition of the third generator, we can see that Player 2 has the same
winning strategy for REL(Dicn , {a, b, c}) as they did for REL(Dicn , {a, x}) when n is even.

Theorem 4.5. Player 2 has a winning strategy for REL(Dicn , {a, b, c}) with even n.

The same argument as described in Theorem 4.3 applies here. Recall that we can
describe every element of Dicn via the normal form ai bj where 0 ≤ i < 2n and j ∈ {0, 1}.
From Theorem 4.3 we know that Player 1 can arrive at words equivalent to ak b for k
even and aℓ for ℓ odd. However, with the addition of the generator c, Player 1 can
also arrive at words equivalent to aℓ b for ℓ odd. Player 2 still can only arrive at words
equivalent to ak for k even. We leave the remaining details to the reader.
As opposed to the game REL(Dicn , {a, x}), Player 2 has a winning strategy for all
n ≥ 2 by Theorems 4.5 and 4.6. Note that we can still use Figure 11.8 as a visual aid for the proof
of Theorem 4.6 by replacing x with b and using that c = ab.
There are several relators of length three in Dicn , which will be used throughout
the proof of Theorem 4.6. These have been collected into Table 11.3 for reference.

Theorem 4.6. Player 2 has a winning strategy for REL(Dicn , {a, b, c}) with odd n.
Table 11.3: Twelve relators of length three in Dicn with generating set {a, b, c}. We refer to these relators as triangle relators because in the Cayley graph of Dicn , they form triangles.

Triangle Relators
abc−1      a−1 cb−1     ab−1 c     a−1 c−1 b
ba−1 c−1   b−1 a−1 c    bc−1 a     b−1 ca
cb−1 a−1   c−1 ab       cab−1      c−1 ba−1

Proof. We will first show that Player 1 cannot win if Player 1 begins the game with b±1
or c±1 .
1. Without loss of generality,1 suppose Player 1 chooses b on their first turn. Then
Player 2 will also play b to move to b2 = an . Then by the move sequences shown
in Table 11.4 we see that Player 2 wins unless Player 1 chooses a.
Player 2’s strategy now is to continue playing a until Player 1 plays anything other
than a or moves to a2n−2 , which must happen because n is odd.

Table 11.4: The sequence of moves if Player 1 first plays b. Player 2 will mirror them and move to b2 = an . Observe that Player 2 wins unless Player 1 chooses a.

Player 2 Moves        If Player 1 Chooses                 Player 2 Moves First To
b →[b] b2 = an        →[a] an+1                           →[a] an+2
                      →[a−1] an−1 = b(ba−1 )              →[c−1] b(ba−1 c−1 ) = be = b
                      →[b] an b                           →[b] an b2 = a2n = e
                      →[c] an c                           →[c] an c2 = a2n = e
                      →[c−1] an c−1                       →[c−1] an c−2 = a2n = e

– If Player 1 moves to a2n−2 , then Player 2 will choose c−1 to move to a2n−2 c−1 =
an−1 b. By the calculations in Table 11.5 we may assume that Player 1 moves to
an−2 by choosing c−1 .
Since n is odd, Player 2 will continue to choose a−1 and be able to move to
a0 = e and win unless Player 1 chooses from {b±1 , c±1 }. Thus let us assume
that Player 1 chooses to play z ∈ {b±1 , c±1 } to move from aℓ to aℓ z for some
even ℓ such that 2 ≤ ℓ ≤ n − 3. Then Player 2 will play z to move from aℓ z to
aℓ z 2 = an+ℓ . Since every vertex am , n ≤ m ≤ 2n − 2, has been visited before,
Player 2 wins.

1 For the case where Player 1 chooses c, simply make the following changes: b changes to c, c to b,
and a±1 to a∓1 .
Table 11.5: The sequence of moves if Player 1 moves to a2n−2 . Note that all move sequences yield Player 2 winning except the last row.

Player 2 Moves             If Player 1 Chooses                              Player 2 Moves To
a2n−2 →[c−1] an−1 b        →[a] (a2n−2 c−1 )a = a2n−2 (c−1 a)               →[b] a2n−2 (c−1 ab) = a2n−2 e = a2n−2
                           →[a−1] (an−1 b)a−1 = an b                        →[b] an b2 = a2n = e
                           →[b] an−1 b2 = a2n−1                             →[a] a2n = e
                           →[b−1] (an−1 b)b−1 = an−1                        →[a] an
                           →[c−1] (a2n−2 c−1 )c−1 = an−2                    →[a−1] an−2 a−1 = an−3

Table 11.6: The top part shows the sequence of moves after the game proceeds through am for n ≤ m ≤ k and Player 1 moves from an+k to an+k z for z ∈ {b±1 , c±1 }. Player 2 wins unless Player 1 chooses c or c−1 , both of which move the game to ak . The bottom part shows the sequence of moves following Player 1's choice of cϵ for some ϵ ∈ {±1}. Note that the only option that does not immediately yield a Player 2 win is for Player 1 to choose a−1 .

Player 1 Moves                                    Player 2 Chooses
an+k →[z] an+k z
  z = b:    an+k b = an+k−1 (ab)                  →[c−1] an+k−1 (abc−1 ) = an+k−1
  z = b−1 : an+k b−1 = an+k−1 (ab−1 )             →[c] an+k−1 (ab−1 c) = an+k−1
  z = c:    an+k c                                →[c] an+k c2 = a2n+k = ak
  z = c−1 : an+k c−1                              →[c−1] an+k c−2 = a2n+k = ak
ak = an+k cϵ cϵ →[y] ak y
  y = a:    (an+k cϵ cϵ )a                        →[b−ϵ] (an+k cϵ )(cϵ ab−ϵ ) = an+k cϵ
  y = a−1 : ak a−1 = ak−1                         →[a−1] ak−1 a−1 = ak−2
  y = b:    ak b                                  →[b] (ak b)b = ak b2 = an+k
  y = b−1 : ak b−1                                →[b−1] (ak b−1 )b−1 = ak b−2 = an+k
  y = cϵ :  ak cϵ                                 →[cϵ] (ak cϵ )cϵ = ak c2ϵ = an+k

– Suppose that Player 1 plays z ∈ {b±1 , c±1 } to move from an+k to an+k z for some
even k such that 2 ≤ k ≤ n − 3. Because of the triangle relators (Table 11.3),
we can see from the top part of Table 11.6 that Player 2 will win unless z is c
or c−1 . Regardless, Player 2 will move the game to ak . From the bottom part of
Table 11.6 we see that the only possible choice for Player 1 is to choose a−1 to
move to ak−1 , with Player 2 also choosing a−1 on the next turn to move to ak−2 .
Player 2 will continue to choose a−1 until Player 1 chooses something other
than a−1 . Should Player 1 choose only a−1 , Player 2 will first reach ak−k = a0 =
e, since k is even. Hence, for some even ℓ, 2 ≤ ℓ ≤ k −2 < n−3, Player 1 chooses
y ∈ {b±1 , c±1 } to move from aℓ to aℓ y, but since y2 = an , Player 2 will also choose
y to move from aℓ y to aℓ y2 = an+ℓ , which has been previously visited.
2. Having shown that Player 1 loses if they choose b or c to start, Player 1 will choose
from {a, a−1 } on their first turn. Without loss of generality, suppose Player 1
chooses a. Then Player 2 will choose a until Player 1 chooses something other
than a. Should Player 1 only choose a, Player 2 will first reach a2n = e because 2n
is even. Hence we assume that Player 1 chooses z in {b±1 , c±1 } to move from aℓ to
aℓ z for some even ℓ such that 2 ≤ ℓ ≤ 2n − 2. We have two cases depending on
whether ℓ > n or ℓ < n.
– Suppose n < ℓ ≤ 2n − 2. Let ℓ = n + k where 1 ≤ k ≤ n − 2. After Player 1 chooses
z ∈ {b±1 , c±1 } to move from an+k to an+k z, Player 2 will choose z to move from
an+k z to an+k z 2 = a2n+k = ak , since z 2 = an for all z ∈ {b±1 , c±1 }. The vertex ak
has previously been visited, and hence Player 2 wins.
– Suppose 2 ≤ ℓ < n. By Table 11.7 we see that only a choice of z = cϵ , ϵ ∈ {±1} is
possible for Player 1, hence moving from aℓ to aℓ cϵ .
If ℓ = n − 1, then Player 2 wins by choosing bϵ to move to an−1 (cϵ bϵ ) =
an−1 an+1 = e. Hence we assume that 2 ≤ ℓ ≤ n − 3. In this case, Player 2 will
subsequently move to an+ℓ . Continuing from the second half of Table 11.7,
we see that Player 1 must choose a−1 to move to an+ℓ−1 . Then Player 2 will
continue to choose a−1 until Player 1 moves to aℓ+2 , which happens because
ℓ is even and n is odd; or until Player 1 plays something other than a−1 . We
examine each case further in items (a) and (b), respectively.

Table 11.7: The lines of play if Player 1 chooses a on their first turn and Player 2 mirrors them until Player 1 moves from aℓ to aℓ z for some even ℓ, 2 ≤ ℓ ≤ n − 3, and z ∈ {b±1 , c±1 }. Note that Player 2 wins unless Player 1 chooses either c or c−1 . Following this further, the bottom part shows Player 1's options after Player 2 moves to an+ℓ above. The only option that does not result in a Player 1 loss is a−1 .

Player 1 Moves                                    Player 2 Chooses
aℓ →[z] aℓ z
  z = b:    aℓ b = aℓ−1 (ab)                      →[c−1] aℓ−1 (abc−1 ) = aℓ−1
  z = b−1 : aℓ b−1 = aℓ−1 (ab−1 )                 →[c] aℓ−1 (ab−1 c) = aℓ−1
  z = cϵ :  aℓ cϵ                                 →[cϵ] aℓ c2ϵ = an+ℓ
aℓ cϵ cϵ = an+ℓ →[z] an+ℓ z
  z = cϵ :  an+ℓ cϵ                               →[cϵ] an+ℓ c2ϵ = a2n+ℓ = aℓ
  z = bϵ :  an+ℓ bϵ                               →[bϵ] an+ℓ b2ϵ = a2n+ℓ = aℓ
  z = a:    an+ℓ a                                →[b−ϵ] an+ℓ (ab−ϵ ) = aℓ cϵ (cϵ ab−ϵ ) = aℓ cϵ
  z = a−1 : an+ℓ a−1                              →[a−1] an+ℓ a−2 = an+ℓ−2
Table 11.8: If play proceeds from an+ℓ−2 as in Table 11.7 with only a−1 chosen by both players, then Player 2 will choose b−1 from aℓ+2 . Note that the next move for Player 1 must be b−1 to move to an+ℓ+2 as all others will be a loss.

Player 2 Moves                         If Player 1 Chooses                                   Player 2 Moves To
aℓ+2 →[b−1] aℓ+2 b−1 = an+ℓ+2 b        →[a] (an+ℓ+2 b)a = an+ℓ+1 b                           →[c−1] (an+ℓ+1 b)c−1 = an+ℓ (abc−1 ) = an+ℓ
                                       →[a−1] (aℓ+2 b−1 )a−1 = aℓ+2 (b−1 a−1 )               →[c] aℓ+2 (b−1 a−1 c) = aℓ+2
                                       →[b−1] (an+ℓ+2 b)b−1 = an+ℓ+2                         →[a] an+ℓ+2 a = an+ℓ+3
                                       →[c] (aℓ+2 b−1 )c = aℓ+2 (b−1 c)                      →[a] aℓ+2 (b−1 ca) = aℓ+2
                                       →[c−1] (an+ℓ+2 b)c−1 = an+ℓ+2 (bb−1 a−1 ) = an+ℓ+1    →[a−1] an+ℓ

(a) Suppose play has continued from an+ℓ−1 with both players choosing only
a−1 until Player 1 moves from aℓ+3 to aℓ+3 a−1 = aℓ+2 . Then Player 2 will
play b−1 to move to aℓ+2 b−1 = an+ℓ+2 b. From Table 11.8 we see that any
choice other than b−1 for Player 1 leads to a loss; hence Player 1 moves
from an+ℓ+2 b to an+ℓ+2 .
In a manner similar to the earlier case (first subcase of (1) above), Player 2
will continue to choose a until Player 1 chooses something other than a.
If ℓ = n − 3, then Player 2 immediately wins by moving from an+ℓ+2 = a2n−1
to a2n = e. Thus we may assume that 2 ≤ ℓ ≤ n − 5. If Player 1 only chooses
a, then Player 2 will win at a2n = e since n + ℓ + 3 is even. Hence Player 1
must move from an+m to an+m y for some odd m, ℓ + 3 ≤ m ≤ n − 2, and
y ∈ {b±1 , c±1 }. Then Player 2 will play y to move to an+m y2 = a2n+m = am
since y2 = an for all y ∈ {b±1 , c±1 }. Since every vertex am , ℓ + 3 ≤ m ≤ n − 2,
has been previously visited, Player 2 wins.
(b) Suppose Player 1 plays z ∈ {b±1 , c±1 } to move from ak to ak z for some odd
k such that ℓ + 3 ≤ k ≤ n + ℓ − 2. We see from Table 11.9 that Player 2 wins
if z ∈ {c±1 }. If k ≥ n, then Player 2 wins if z ∈ {b±1 } as well since an+k has
already been visited in this case.
Now consider the case where k ≤ n − 2 and hence where an+k has not
been visited. By the second part of Table 11.9 we see that Player 2 wins
unless Player 1 chooses a. Player 2 will now choose a until Player 1 chooses
otherwise. If k = n − 2, then Player 2 wins by choosing a to move from
a2n−1 to a2n = e. Otherwise, since n + k + 2 is even, Player 2 will win at
a2n = e unless Player 1 chooses some y ∈ {b±1 , c±1 } to move from an+j to
an+j y where k + 2 ≤ j ≤ n − 2. In this case, Player 2 will mirror to move
to an+j y2 = an+j an = aj , which has already been visited. Thus Player 2
wins.
Table 11.9: The sequence of moves if play proceeds from an+ℓ−2 as in Table 11.7 until Player 1 chooses z ∈ {bϵ , cϵ }, ϵ ∈ {±1}, to move from ak+1 a−1 = ak to ak z. Note that Player 1 must choose z = bϵ by the relators in Table 11.3. Following this further, the bottom part shows Player 1's options after Player 2 moves to an+k above. The only option that does not result in a Player 1 loss is a.

Player 1 Moves                                    Player 2 Chooses
ak+1 a−1 = ak →[z] ak z
  z = bϵ :  ak bϵ                                 →[bϵ] ak b2ϵ = an+k
  z = cϵ :  ak cϵ                                 →[b−ϵ] (ak cϵ )b−ϵ = ak+1 (a−1 cϵ b−ϵ ) = ak+1
ak bϵ bϵ = an+k →[y] an+k y
  y = bϵ :  an+k bϵ                               →[bϵ] an+k b2ϵ = ak
  y = c:    an+k c                                →[c] (an+k c)c = an+k c2 = a2n+k = ak
  y = c−1 : an+k c−1                              →[c−1] (an+k c−1 )c−1 = an+k c−2 = a2n+k = ak
  y = a:    an+k a = an+k+1                       →[a] (an+k+1 )a = an+k+2
  y = a−1 : (ak bϵ bϵ )a−1 = ak (bϵ a−1 )         →[c−ϵ] (ak bϵ )(bϵ a−1 c−ϵ ) = ak bϵ
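
The normal-form bookkeeping behind Tables 11.4–11.9 is mechanical and can be double-checked in code. For instance, the twelve triangle relators of Table 11.3 can be verified with the hypothetical dicyclic() sketch from Section 4.1, using the identification b = x−1 and c = ax−1 (capital letters denote inverses below):

def triangle_relators_hold(n):
    e, mult, inv, _ = dicyclic(n)
    a, x = (1, 0), (0, 1)
    b, c = inv(x), mult(a, inv(x))          # b = x^{-1}, c = a x^{-1}
    gen = {"a": a, "A": inv(a), "b": b, "B": inv(b), "c": c, "C": inv(c)}
    words = ["abC", "AcB", "aBc", "ACb", "bAC", "BAc",
             "bCa", "Bca", "cBA", "Cab", "caB", "CbA"]  # rows of Table 11.3
    def evaluate(word):
        g = e
        for letter in word:
            g = mult(g, gen[letter])
        return g
    return all(evaluate(w) == e for w in words)

print(all(triangle_relators_hold(n) for n in range(2, 10)))  # expect True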

5 REL for products of cyclic groups


In this section, we consider the group ℤn × ℤm with the following presentation:

⟨a, b | an = bm = aba−1 b−1 = e⟩.

By Theorem 3.2 and the fact that the undirected Cayley graphs of (Dn , {r, s}) and
(ℤn ×ℤ2 , {(1, 0), (0, 1)}) are isomorphic for n ≥ 3, we know the winner of REL(ℤn ×ℤ2 ) for
n ≥ 3 by Theorem 2.7. Additionally, we know that Player 2 has a winning strategy for
REL(ℤ2 × ℤ2 ) since its undirected Cayley graph is equal to that of (ℤ4 , {1}). Additional
cases for REL(ℤn × ℤm ) are covered by the following theorem.

Theorem 5.1. Consider the game REL(ℤn × ℤm , {a, b}), where n ≥ m − 1 and n, m ≥ 3.
1. If n ≡ ±1 mod m, then Player 1 has a winning strategy.
2. If n ≡ 0 mod m, then Player 2 has a winning strategy.
3. If m = 4 and n ≡ 2 mod 4, then Player 2 has a winning strategy.

Proof. Let G ≅ ℤn × ℤm with the presentation above. We can write elements from G in
the normal form ai bj , where 0 ≤ i < n and 0 ≤ j < m. Note that the Cayley graph for
G can be visualized as an n-gon of m-gons (see Figure 11.10 for a partial Cayley graph
example).

Figure 11.10: A partial Cayley graph for ℤ4 × ℤ3 , a 4-gon of 3-gons. The colored edges give an example of Player 1's strategy as described in Theorem 5.1. The "red" edges denote Player 1 moves, and the "blue" edges denote Player 2 moves. By choosing the generator opposite to Player 2's last move, Player 1 will always ensure victory. Note as well that, in this example, the game word is aabbab ≡G a3 b3 = a3 , but Player 2 has not achieved a relator.
The strategy is similar in all cases, so we will describe the strategy first in terms
of the players Winner and Loser as opposed to Player 1 and Player 2. The game begins
with Winner completing an initial word w ≡G g for some g ∈ G, where a−1 and b−1 do
not appear in w. Without loss of generality, we assume that a is played before a−1 and
b is played before b−1 . After this, Winner’s strategy is to play a if Loser plays b and to
play b if Loser plays a. Play will continue in this manner unless Loser plays a−1 or b−1 ,
which must occur from the vertex g(ab)ℓ for some ℓ. Note that by Winner’s strategy
the exponent of a is nondecreasing. We show that this strategy is a winning strategy
if Loser plays a−1 or b−1 and hence that we may assume that Loser plays only a and b.
Suppose that Loser plays a−1 . Then Winner could not have played a on the pre-
vious turn since backtracking is disallowed. Hence Winner played b on the previ-
ous turn. By Winner’s strategy this means that Loser must have played a the turn
prior. Then the three most recent vertices visited before Loser plays a−1 are, in order,
g(ab)ℓ−1 , g(aℓ bℓ−1 ), and g(ab)ℓ . Since the exponent of a is nondecreasing, Loser does
not win by playing a−1 to move to g(ab)ℓ a−1 = g(aℓ−1 bℓ ). Then Winner wins by com-
pleting a commutation relator and choosing b−1 to move to g(ab)ℓ−1 . The case where
Loser plays b−1 is similar. Thus we may assume that Loser only plays a and b, and
Winner therefore moves to g(ab)i for all i.
We now consider the following five cases:
(1) If n = km + 1 for some k, then Player 1 initially plays a so that g = a and then exe-
cutes Winner’s strategy for km turns to win at g(ab)km = a(ab)km = akm+1 bkm = e.
(2) If n = km − 1 for some k, then Player 1 initially plays b so that g = b and executes
Winner’s strategy for km − 1 turns to win at g(ab)km−1 = b(ab)km−1 = akm−1 bkm = e.
(3) If n = km for some k, then Player 2 considers g = e, which they reach on the
0th turn. Then they execute Winner’s strategy for km turns to win at g(ab)km =
e(ab)km = akm bkm = e.
(4) If m = 4 and n = 4k + 2 for some k and Player 1 initially plays a, then Player 2 also
plays a to complete g = a2 and then executes Winner’s strategy for 4k turns to win
at g(ab)4k = a2 (ab)4k = a4k+2 b4k = e.
(5) If m = 4 and n = 4k + 2 for some k and Player 1 initially plays b, then Player 2
plays b to complete g = b2 and executes Winner’s strategy for 4k + 2 turns to win
at g(ab)4k+2 = b2 (ab)4k+2 = a4k+2 b4k+4 = e.
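
As with the earlier families, the five cases above can be spot-checked with the brute-force sketch from Section 2 (our illustration), since multiplication in ℤn × ℤm is just componentwise addition:

def zz(n, m):
    mult = lambda g, h: ((g[0] + h[0]) % n, (g[1] + h[1]) % m)
    inv = lambda g: ((-g[0]) % n, (-g[1]) % m)
    return (0, 0), mult, inv, [(1, 0), (0, 1)]

# Keep n and m small: the search is exponential in nm.
for n, m in [(3, 3), (4, 3), (5, 3)]:
    e, mult, inv, gens = zz(n, m)
    print(n, m, "Player 1" if first_player_wins(e, mult, inv, gens) else "Player 2")
# Theorem 5.1 predicts Player 2 for (3, 3) and Player 1 for (4, 3) and (5, 3).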

6 Three-player REL for dihedral groups


In this section, we examine an extension of the REL game for dihedral groups to three
players. With more than two players, we must address the issue of what a player does
when they no longer can win. To do so, we may establish a podium rule, or ranking sys-
tem, to define preferences for each player. This ranking is important because it gives
each player a preference for who wins. We will examine REL for dihedral groups with
two different podium rules. The first is the podium rule proposed by Li [11] to analyze
n-player Nim and studied further in the three-player case by Nowakowski, Santos, and
Silva [12]. The second is the podium rule utilized by Benesh and Gaetz [7] to analyze
q-player DNG, where q is prime.
First, we define the standard podium rule for REL with n players as used by Li and
Nowakowski, Santos, and Silva. In this rule the final player to make a legal move, i. e.,
the player to complete a relator, is the winner. The penultimate player to move finishes
runner-up, the player before that third, and so on. If a player cannot ensure victory for
themselves, then they will assist the player who ensures their highest possible ranking
to win the game.
For three players, this is a bit simpler. The player to complete a relator is the win-
ner. The previous player is second, and the next player to move is third and last. Be-
cause of this podium rule, note that if Player 1 cannot win, then Player 1 will help
Player 2 to win. If Player 2 cannot win, then Player 2 will help Player 3. Finally, if Player 3
cannot win, then Player 3 will help Player 1.

Theorem 6.1. For three-player REL(Dn , {r, s}) with the standard podium rule, Player 1 has
a winning strategy if n ≡ 0 mod 3 or n ≡ 1 mod 3, and Player 2 has a winning strategy
if n ≡ 2 mod 3.

Proof. We will first show that in most cases, a player will choose not to play s. Let
{A, B, C} = {1, 2, 3} such that Player A follows Player C, Player B follows Player A, and
Player C follows Player B. Suppose that the game begins with the word w, where w
does not end in s, and that, without loss of generality, Player C plays r to move to the
word wr. Suppose that Player A does choose s to move to the word wrs. We show that
Player A can finish neither first nor second from this point and hence would never
have chosen s from wr.
If Player A has a winning strategy from this point, then Player B would finish last.
Hence Player B will choose r to move to wrsr, from which Player C will choose s to move
to wrsrs ≡Dn w to win, thus securing a second-place finish for Player B. Thus Player A
cannot win from this situation. Suppose instead that Player A can finish second and
hence that Player B has a winning strategy from the position wrs. Clearly, it cannot
be by choosing r, since this leads to a Player C win. Thus Player B must choose r −1 to
move to wrsr −1 . If Player B has a winning strategy from this point, then Player C will
play s to move to wrsr −1 s, from which Player A will play r −1 to win at wrsr −1 sr −1 ≡Dn wr.
Thus Player B has no winning strategy.
Since Player A and Player B cannot have a winning strategy from wrs, it follows
that Player C has a winning strategy. However, this leads to a last place finish for
Player A, so Player A would never have chosen s.
Now we examine the cases where n ≡ 0, 1, 2 mod 3. First, suppose n ≡ 1 mod 3. Player 1 has a winning strategy by always choosing r. After Player 1 chooses r on turn 1, we may consider w = e, so that Player 2 is in the situation described above. Hence Player 2 will not choose s and must choose r. This continues until the game reaches r^n ≡Dn e. Since n ≡ 1 mod 3, Player 1 reaches r^n and is thus the winner.
Now suppose n ≡ 0 mod 3. In this case, Player 1 begins by choosing s. Without loss of generality, Player 2 chooses r to move to sr. If Player 3 chooses s, then Player 1 wins by choosing r to move to srsr ≡Dn e. If Player 3 chooses r, then for all following turns, we are in the case where the game begins with a word w not ending in s followed by a choice of r. Therefore all players will choose r until the game reaches sr^n ≡Dn s. Since n ≡ 0 mod 3, Player 1 is the player to reach sr^n and hence the winner.
Finally, suppose n ≡ 2 mod 3. If Player 1 chooses s, then without loss of generality, Player 2 will choose r to move to sr. Player 3 must choose r, and each subsequent turn will result in a choice of r until the game reaches sr^n ≡Dn s. Since n ≡ 2 mod 3 and this is a total of n + 1 moves, Player 3 reaches sr^n and is the winner. Thus Player 1 finishes last if they choose s on turn 1 and will instead, without loss of generality, choose r. Again by the argument above, we may assume that all players will play r on subsequent turns until the game ends at r^n ≡Dn e. Since n ≡ 2 mod 3, Player 2 reaches r^n and is the winner.

We now examine another podium rule for REL with n players as used by Benesh
and Gaetz [7]. The first player to complete a relator still wins the game. The ranking
then follows in the opposite manner of the standard podium rule, that is, the following
player is runner-up, the next player is third, and so on. We refer to this as the reverse
podium rule.

In the case of three players, this means that if Player 1 cannot win, then Player 1
will help Player 3 to win. If Player 2 cannot win, then Player 2 will help Player 1. Finally,
if Player 3 cannot win, then Player 3 will help Player 2.

Remark 6.2. Note that Remark 3.4 still holds for dihedral groups. A player will never
prefer to be last; if Player m completes the third edge of a square, then Player m + 1
mod 3 wins, and hence Player m finishes last according to this podium rule. Since
finishing last is never preferable, no player will complete a third edge of a square if it
can be avoided. Due to this, Remark 3.5 still holds, that is, a choice of the generator s
forces the following two moves.

Theorem 6.3. For three-player REL(Dn , {r, s}) with the reverse podium rule, Player 1 has
a winning strategy if n is odd, and Player 3 has a winning strategy if n is even.

Proof. The key ingredient to this proof is Remark 6.2. As in the proof of Lemma 3.7, we assume, without loss of generality, that players move to words equivalent to r^i s^j with 0 ≤ i ≤ n − 1, 0 ≤ j ≤ 1, where i is nondecreasing. Remark 6.2 then implies that a player may play s to guarantee that the game moves from r^i s^j to r^{i+2} s^{j+1} with that same player next to move.
Now suppose n = 2k + 1 for some k. Then, by playing s k consecutive times, Player 1 ensures that the game moves to r^{2k} s^k with Player 1 moving next. Player 1 then moves to r^{2k+1} s^k ∈ {e, s} to win since s has been visited.
Now suppose n = 2k for some k. We note that Player 1 can help Player 3 to win by always playing s. After k times, the game arrives at r^{2k} s^k = s^k ∈ {e, s}, where Player 3 makes the last move to win since Player 2 must have moved from r^{2k−2} s^k to r^{2k−1} s^k by Remark 6.2. This implies that Player 2 cannot have a winning strategy since Player 1 will always prefer Player 3 to win instead of Player 2. To conclude that Player 3 must win, we now show that Player 1 does not have a winning strategy when n is even and will therefore help Player 3 to win.
If Player 1 always selects s, then we have shown that Player 3 wins. Thus we may assume that Player 1 plays r^{±1} at some point after playing s ℓ consecutive times for some 0 ≤ ℓ < k; that is, without loss of generality, the game begins with r^{2ℓ+1} s^ℓ with Player 2 moving next.
Suppose n ≡ 2 mod 4 or ℓ > 0. We show that Player 2 has a winning strategy. They can play s k − ℓ − 1 consecutive times to move the game to

r^{(2ℓ+1)+2(k−ℓ−1)} s^{ℓ+(k−ℓ−1)} = r^{2k−1} s^{k−1},

where Player 2 moves next. Player 2 then will move to r^{2k} s^{k−1} = s^{k−1} ∈ {e, s}. If ℓ > 0, then s has been previously visited, so Player 2 wins. If n ≡ 2 mod 4, then k ≡ 1 mod 2, so k − 1 is even. Hence s^{k−1} = e, and Player 2 wins at e.
Now suppose n = 4m for some m and ℓ = 0. Then, without loss of generality, Player 1 plays r on turn 1. We may assume that Player 2 plays s t consecutive times

for some 0 ≤ t < k, resulting in the game moving to r^{2t+1} s^t with Player 2 next to move. We look at two cases, namely that Player 2 either always plays s or eventually plays r^{±1}. If Player 2 always plays s, then t = k − 1 = 2m − 1, and the game begins with r^{2(2m−1)+1} s^{2m−1} = r^{4m−1} s^{2m−1} = r^{n−1} s. From here Player 2 must move to s or to r^{n−1}. Since ℓ = 0, s has not been visited, so then Player 3 moves to e to win in either case.
Finally, suppose that Player 2 eventually plays r^{±1}, that is, t < k − 1, and the game begins with r^{2t+1} s^t with Player 2 then moving to r^{2t+2} s^t. Player 3 moves next and has a winning strategy by playing s 2m − t − 1 times to move to r^{(2t+2)+2(2m−t−1)} s^{t+(2m−t−1)} = r^{4m} s^{2m−1} = s, with Player 3 next to move. Since s has not been visited, Player 3 then moves to e to win.
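The case analyses in Theorems 6.1 and 6.3 lend themselves to a brute-force cross-check for small n. The sketch below is not the authors' code; it assumes the conventions of this section: a move multiplies the game word by r, r^{-1}, or s, a relator is completed when the word's value first revisits a previously visited group element, and a player who cannot win assists according to whichever podium rule is in force. The function name solve_rel_dn and the encoding of Dn are our own.

def solve_rel_dn(n, reverse=False):
    """Winner (1, 2, or 3) of three-player REL(D_n, {r, s}) from the identity.

    An element r^a s^b of D_n is encoded as (a, b); right multiplication:
      (a, b) * r^{+-1} = ((a +- (-1)^b) mod n, b),   (a, b) * s = (a, 1 - b).
    A move that revisits an element completes a relator and wins.
    """
    moves = [1, -1, 's']

    def mult(x, g):
        a, b = x
        if g == 's':
            return (a, 1 - b)
        return ((a + (g if b == 0 else -g)) % n, b)

    def rank(player, winner):
        # Standard podium: winner best, previous mover second, other last.
        # Reverse podium: winner best, following mover second, other last.
        return (player - winner) % 3 if reverse else (winner - player) % 3

    def best(visited, cur, mover):
        # Exponential search; adequate for the small n checked below.
        results = []
        for g in moves:
            nxt = mult(cur, g)
            if nxt in visited:
                return mover                 # completing a relator is always best
            results.append(best(visited | {nxt}, nxt, (mover + 1) % 3))
        return min(results, key=lambda w: rank(mover, w))

    e = (0, 0)
    return best(frozenset([e]), e, 0) + 1

# Theorem 6.1 (standard podium): Player 2 wins iff n == 2 (mod 3).
assert all(solve_rel_dn(n) == (2 if n % 3 == 2 else 1) for n in range(3, 7))
# Theorem 6.3 (reverse podium): Player 1 for odd n, Player 3 for even n.
assert all(solve_rel_dn(n, reverse=True) == (1 if n % 2 else 3) for n in range(3, 7))

If the sketch's conventions faithfully match the game, the assertions reproduce both theorems for n ≤ 6; memoization or symmetry reduction would be needed for larger n.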

7 Open questions
– When first devising the games REL and RAV, we wanted to create a combinatorial
game that utilized the Cayley graph of a group. Although the Cayley graph is not
necessary in defining our relator games, we have found it useful when construct-
ing some of our proofs. To that end, we can study the make a cycle and avoid a
cycle games on general graphs. We expect these games to be more challenging
to study due to the absence of properties such as graph regularity and symmetry
inherent in Cayley graphs.
– A fundamental problem in combinatorial game theory for impartial games is to
compute the nim-number of a game (see [15]). These allow us to determine the
outcome of the game as well as of game sums. Although we have determined the
outcome of the games REL and RAV for several families of groups, we leave open
the problem of computing their nim-numbers.
– Another goal is to extend results on REL and RAV to n players. For REL on the di-
hedral groups, this becomes difficult after more than three players are involved,
since a player can only force moves two ahead and thus loses control over their
future moves. Note that the related game DNG for n-players was studied in [7].
– We can of course ask for the outcomes of REL and RAV on other families of finite
groups. Of specific interest are the generalized dihedral groups (see Example 3.11).
We have results for RAV for generalized dihedral groups via Theorem 3.9. For a
finite generalized dihedral group G ≅ H ⋊ ℤ2 , suppose we have a winning strategy
for REL(H, T). Can we then determine a winning strategy for REL(G, S) in a manner
similar to that of Theorem 3.9?
– In computing several game trees while working through examples, we observed
that most games of REL end after traversing at most half the vertices in the Cayley
graph. We have also observed that the game seems to be less complex when more
generators are involved. These lead to interesting questions from a computational
point of view. Can we find winning strategies using a minimal number of moves
or find a correlation between sparseness of the Cayley graph and computational complexity of the game?
– We can certainly explore the games REL and RAV via a computer program. The authors have done some preliminary work on this project with College of Wooster undergraduates Minhwa Lee and Pavithra Brahmananda Reddy.2 One
avenue is to apply machine learning techniques such as reinforcement learning
to create an artificial intelligence for different families of groups.
– Both authors have incorporated Cayley graphs into their abstract algebra courses.
Using the games of REL and RAV for Cayley graphs of dihedral groups and symmet-
ric groups is an alternative way of getting students to practice understanding the
structure of these groups. A structured and rigorous implementation of such an
approach is a possible direction for pedagogical research.

Bibliography
[1] R. Alvarado, M. Averett, B. Gaines, C. Jackson, M. L. Karker, M. A. Marciniak, F. Su and S.
Walker, The game of cycles, preprint.
[2] M. Anderson and F. Harary, Achievement and avoidance games for generating abelian groups,
Internat. J. Game Theory 16, No. 4 (1987), 321–325.
[3] B. J. Benesh, D. C. Ernst, and N. Sieben, Impartial avoidance and achievement games for
generating symmetric and alternating groups, Int. Electron. J. Algebra 20 (2016), 70–85.
[4] B. J. Benesh, D. C. Ernst, and N. Sieben, Impartial avoidance games for generating finite groups,
North-West. Eur. J. Math. 2 (2016), 83–103.
[5] B. J. Benesh, D. C. Ernst, and N. Sieben, Impartial achievement games for generating
generalized dihedral groups, Australas. J. Combin. 68 (2017), 371–384.
[6] B. J. Benesh, D. C. Ernst, and N. Sieben, Impartial achievement games for generating nilpotent
groups, J. Group Theory 22, No. 3 (2019), 515–527.
[7] B. J. Benesh and M. R. Gaetz, A q-player impartial avoidance game for generating finite groups,
Internat. J. Game Theory 47, No. 2 (2018), 451–461.
[8] D. C. Ernst and N. Sieben, Impartial achievement and avoidance games for generating finite
groups, Internat. J. Game Theory 47, No. 2 (2018), 509–542.
[9] P. Frankl, On a pursuit game on Cayley graphs, Combinatorica 7 (1987), 67–70.
[10] F. Lehner, Firefighting on trees and Cayley graphs, Australas. J. Combin. 75 (2019), 66–72.
[11] S.-Y. R. Li, N-person Nim and N-person Moore’s games, Internat. J. Game Theory 7 (1978),
31–36.
[12] R. Nowakowski, C. Santos, and A. Silva, Three-player nim with podium rule, Internat. J. Game Theory (2020), 1–11.
[13] R. Nowakowski and P. Winkler, Vertex-to-vertex pursuit in a graph, Discrete Math. 43, No. 2–3
(1983), 235–239.

2 mlee21@wooster.edu, pbrahmanandareddy22@wooster.edu. The second author would like to thank the College of Wooster Sophomore Research Program for helping finance Minhwa Lee's and Pavithra Brahmananda Reddy's work.

[14] A. Quilliot, Jeux et points fixes sur les graphes, Thèse de 3ème cycle, Université de Paris VI (1978), 131–145.
[15] A. N. Siegel, Combinatorial Game Theory, Volume 146 of Graduate Studies in Mathematics,
American Mathematical Society, Providence, RI, 2013.
[16] F. Su, Mathematics for Human Flourishing, Yale University Press, New Haven, CT, 2020.
L. R. Haff
Playing Bynum’s game cautiously
Abstract: Several sequences of infinitesimals are introduced for the purpose of ana-
lyzing a restricted form of Bynum’s game or “Eatcake”. Two of these have terms with
uptimal values (à la Conway and Ryba, the 1980s). All others (eight) are specified by
“uptimal+ forms,” i. e., standard uptimals plus a fractional uptimal. The game itself is
played on an n × m grid of unit squares, and here we describe all followers (submatri-
ces) of the 12 × 12 grid. Positional values of larger grids become intractable. However,
an examination of n × n squares, 2 ≤ n ≤ 21, reveals that all but three of them are
equal to ∗, the exceptions being the 10 × 10, 14 × 14, and 18 × 18 cases. Nonetheless, the
exceptional cases have “star-like” characteristics: they are of the form ±(G), confused
with both zero and up, and less than double-up.

1 Introduction and summary


A version of Bynum’s game, also known as “Eatcake,” is examined. The notation, ter-
minology, and basic concepts of combinatorial game theory are assumed mostly with-
out further comment. From time to time, however, certain basics are underscored for
exposition. Winning Ways for your Mathematical Plays is highly recommended as an
introduction to combinatorial game theory. Other pertinent books include Lessons in
Play, An Introduction to Combinatorial Game Theory [1] and An Introduction to Combi-
natorial Game Theory [4]. At the graduate level, Combinatorial Game Theory [8] is a
standard text and reference.
Combinatorial game theory concerns two-person games of perfect information.
Well known games of this kind include Chess, Checkers, and Go. Our players are
named Left and Right, and it is customary to think of Left as feminine and positive,
whereas Right is masculine and negative. The fundamental theorem of combinatorial
game theory states that “Either Left can force a win by playing first (on a game G)
or else Right can force a win by playing second, but not both” [8, p. 9]. Accordingly,
the following questions are of interest: “Which player can force a win?” and “What is
the winning move?” We assume the normal play convention according to which the
player who makes the last legal move wins the game.

Acknowledgement: In addition to correspondence with Neil A. McKay and the many helpful sugges-
tions provided by a referee, who examined two revisions, the author is grateful to Ruihan Zhuang, a
student assistant, who did computer calculations that verified and augmented those reported earlier
by the author.

L. R. Haff, Department of Mathematics, University of California, San Diego, California, USA, e-mail:
lhaff@ucsd.edu

https://doi.org/10.1515/9783110755411-012

Figure 12.1: Top: Possible opening moves from the 6 × 8 starting position. Bottom: A possible move
by Left from the 1 × 8 position.

Eatcake is described and analyzed in On Numbers and Games [3, pp. 199–202]; also, [2,
pp. 233–235]. It is an example of a dicotic game, which means that both players can
move from every nonempty subposition of the game. See [8, p. 60]. Every subposition
of a dicotic game is necessarily infinitesimal. In general, a game G is infinitesimal if
−ε < G < ε for every ε > 0.
The starting position for Bynum’s game is an n×m grid of unit squares where n ≥ 1
and m ≥ 1. Any such grid is called a “cake” (with 1 × 1 as a particular case). The rules
are now illustrated by referring to the 6 × 8 starting position shown in Figure 12.1.
If Left is First, then she is required to completely remove any column from the start-
ing position. Looking at Figure 12.1, she has removed the third column, thus splitting
it into separate cakes A and B. For the second move, Right must now choose either A
or B and remove any row. (Left always takes columns; Right always takes rows.) In this
case, Right chooses B and removes the fourth row. Now Left moves in either A, C, or
D, etc. The game ends when no cakes remain, and the winner is the player who eats
the last cake.
Any single row (or column) can appear as a position in Bynum’s game. For ex-
ample, at the bottom of Figure 12.1, we see that Left has moved in the 1 × 8 cake by
taking the fourth square. However, a move by Right (from the starting position) would
eliminate the entire row. This rather obvious position is pointed out only because in-
dividual rows or columns are eliminated as soon as they appear in our version of the
game. (They become zero positions.)
In the above, it is clear that Left had 4 distinct options to start with. She started
by eliminating the third column, but the same result would appear had she eaten the
sixth column instead. Similarly, had Right been First, his removal of the second row
(for example) would have the same meaning as the removal of the fifth row. Conse-
quently, the expression “third column” will refer to the third column from either the
left edge or the right edge. Similarly, “second row” will refer either to the second row
from the top or the second from the bottom, etc.
Our present version of Bynum’s game is played on an n × m grid with n ≥ 2 and
m ≥ 2. Moves are made exactly as in Bynum’s (original) game along with the following
modification: if a single row or column becomes isolated by a move, then it is treated as

a zero position (such leftovers become tainted!). In the above example, suppose Left
had used the first move to take the second column instead of the third. In this case
the first column becomes a zero-position due to its isolation. Likewise, in Figure 12.1,
consider Left’s options for the third move. If she chooses A, then both columns are
effectively removed from play. Finally, consider the n × 3 (or 3 × m) case. At some point
in the game, if Right opts to play in D, for example, then he can move in exactly two
different ways. He can eat the first row (as in Bynum’s game), or he can eat the second
row, in which case D completely disappears because the two remaining rows become
isolated.
We denote an n × m starting position by [n × m]. A follower of [n × m] is any game
position that can appear after a number of moves have been made, and such positions
consist of a number of cakes or components. Again, see Figure 12.1 for an example.
After two moves, the follower consists of components A, C, and D.
First, all values of [2 × m] for m ≥ 2 are uptimals. Values of this kind were intro-
duced by Conway and Ryba in the 1980s. This initial work was unpublished, but is
often cited in books and research articles. In this regard, see McKay [6, pp. 210–214]
and [8, p. 95]. In particular, McKay [6] extended the theory of uptimals and provided
computer code for computing their canonical forms.
In this paper, all values of [n × m] for 2 < n ≤ 12 and 2 < m ≤ 12 are expressed as
“uptimal+ forms”; that is, ordinary uptimals are increased (or decreased) by multiples
of ↑3/2 . Ten sequences of infinitesimals are defined for this purpose, and considerable
effort is devoted to establishing their properties. Notably, CGSuite output is translated
into relatively tractable terms via these sequences.
The followers of [n × n], 12 < n ≤ 21, become prohibitively complex. (The [18 ×
18] case, for example, requires 1,832 pages to print out.) Nevertheless, we are able to
describe the values of all square positions in this range. In this regard, we encounter
a peculiarity that we have been unable to explain: all [n × n] positions, 2 ≤ n ≤ 21, are
equal to ∗ except for n = 10, 14, and 18. However, these exceptional cases have “star-
like” characteristics. These are of the form ±(G), confused with both zero and up, and
less than double-up.

2 Game-theoretic values
The left and right options of [n × m], a position from Bynum’s (modified) game, are
given by

LeftOptions ([n × m]) = {[n × (m − i)] + [n × (i − 1)] for 1 ≤ i ≤ a} (12.1)

and

RightOptions ([n × m]) = {[(n − i) × m] + [(i − 1) × m] for 1 ≤ i ≤ b},



where a and b are the numbers of left and right options, respectively,

a = ⌊(m + 1)/2⌋ and b = ⌊(n + 1)/2⌋.

Here [n × 0] = [n × 1] = [0 × m] = [1 × m] = 0 (and ⌊x⌋ is the greatest integer less than or equal to x). Since [n × m] = −[m × n], the values are given only for n ≤ m.
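For reference, equation (12.1) is easy to mechanize. The following sketch is illustrative only (the function names are ours); it enumerates the options of [n × m] as ordered pairs of smaller cakes:

def left_options(n, m):
    """Left options of [n x m] per equation (12.1): removing column i
    splits the cake into [n x (m - i)] + [n x (i - 1)]."""
    a = (m + 1) // 2                      # a = floor((m + 1) / 2)
    return [((n, m - i), (n, i - 1)) for i in range(1, a + 1)]

def right_options(n, m):
    """Right options of [n x m]: removing row i, by symmetry."""
    b = (n + 1) // 2
    return [((n - i, m), (i - 1, m)) for i in range(1, b + 1)]

# The 6 x 8 cake of Figure 12.1 has a = 4 Left options and b = 3 Right options.
print(left_options(6, 8))
# -> [((6, 7), (6, 0)), ((6, 6), (6, 1)), ((6, 5), (6, 2)), ((6, 4), (6, 3))]

Components with a dimension of 0 or 1 are zero positions, as noted above.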
Since the values of [2 × m], m ≥ 2, are uptimals, we begin with a brief review of
such terms. In addition, we briefly review fractional uptimals and atomic weight.

3 Uptimals, fractional uptimals, and atomic weight


3.1 The first instances
The uptimals are defined in terms of the basic units zero 0 = { | }, star ∗ = {0 | 0}, and
up ↑ = {0 | ∗}. Specifically, they are given by

↑[0] = 0, ↑[1] = ↑, . . . , ↑[n] = {↑[n−1] | ∗}, n ≥ 1;


↑0 = 0, ↑1 = ↑, . . . , ↑n = {0 | ↓[n−1] ∗}, n ≥ 1.

The game ↑n is called “up-nth,” and the negatives of these games are “down,”

↓ = −↑ = {∗ | 0}, ↓[n] = −↑[n] = {∗ | ↓[n−1] }, and ↓n = −↑n = {↑[n−1] ∗ | 0}.

The following (traditional) notation appears throughout: for any game G, we write
G∗ = G + ∗, G↑ = G + ↑, etc.
Uptimals are the first of the dicotic games. (Recall that such games have an option
for both players at every position during the game.) An example of a game that is in-
finitesimal but not dicotic is {0 | {0 | −2}}. In this case, Right has an option in every
nonzero subposition, but Left has no option in −2 = { | −1}.
A number of properties are now stated for future reference. Again, see [6] for a tidy
presentation of proofs.

Lemma 3.1. The sequences ↑[n] and ↑n are positive; also, they are increasing and de-
creasing, respectively.

Lemma 3.2. As canonical forms, we have

↑[n] ∗ = {0, ↑[n−1] ∗ | 0} and ↑n ∗ = {0, ∗ | ↓[n−1] }.

Lemma 3.3. The games ↑[n] ∗ and ↑n ∗ are confused with 0 (i. e., they are fuzzy).

Lemma 3.4. The games ↑[p] and ↑[q] ∗ are confused for all nonnegative integers p and q.

Theorem 3.5. For each n ≥ 2, we have ↑[n] > m ⋅ ↑n for all positive integers m. (Thus we
say that “↑n is infinitesimal with respect to ↑[n] ”.)

Lemma 3.6. For n ≥ 1, we have ↑[n] − ↑[n−1] = ↑n or, equivalently,

↑[n] = ↑ + ↑2 + ⋅ ⋅ ⋅ + ↑n .

3.2 Certain generalizations


More generally, uptimals are given as follows.

Definition 3.7. An uptimal u is a game of the form

u = d0 ⋅ ∗ + d1 ⋅ ↑ + d2 ⋅ ↑2 + ⋅ ⋅ ⋅ + dn ⋅ ↑n ,

where d0 is either 0 or 1, and di ∈ ℤ for i > 0.

Following Conway and Ryba, the above expression is sometimes written as

u = .d1 d2 . . . dn ∗,

where the additive ∗ may or may not be present.


An example follows. Let G = ∗ + 2 ⋅ ↑ + 3 ⋅ ↑2 + ↑3 and H = −↑ − 4 ⋅ ↑2 + 3 ⋅ ↑3 . These are represented by G = .231∗ and H = .1̄4̄3, where accents on the digits of H indicate negation. As a sum of games, we merely add coefficients; e. g., G + H = .231∗ + .1̄4̄3 = .11̄4∗. It naturally follows that −(G + H) = −G − H = .1̄14̄∗, etc.
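This coefficientwise bookkeeping is trivial to automate; the sketch below (our own illustration, with an ad hoc (star, coefficients) encoding) reproduces the sum just computed:

def add_uptimals(u, v):
    """Sum of uptimals given as (star, coeffs): star is 0 or 1 (stars add
    modulo 2) and coeffs[k] is the coefficient of up-(k+1)."""
    su, cu = u
    sv, cv = v
    width = max(len(cu), len(cv))
    cu = cu + [0] * (width - len(cu))
    cv = cv + [0] * (width - len(cv))
    return ((su + sv) % 2, [x + y for x, y in zip(cu, cv)])

G = (1, [2, 3, 1])        # .231∗
H = (0, [-1, -4, 3])      # .1̄4̄3
print(add_uptimals(G, H))  # (1, [1, -1, 4]), i.e., .11̄4∗ as above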
Now fractional uptimals are briefly described. For G = {GL | GR } and H = {H L | H R },
their ordinal sum is defined by

G : H = {GL , G : H L | GR , G : H R }.

Among the first instances, we have

∗ : n = ↑[n] ∗, n = 0, 1, 2, . . . .

Correspondingly, for dyadic rational numbers x ≥ 0, the fractional uptimals are de-
fined by ↑[x] ∗ = (∗ : x) and ↑x+1 = {0 | ↓[x] ∗} along with ↓[x] = −↑[x] and ↓x+1 = −↑x+1 .
The particular cases ↑[1/2] = {0 | ∗, ↑} and ↑3/2 = {0 || 0, ↓∗ | 0, ∗} are of particular
importance in what follows. From Siegel [8, Exercise 4.24, p. 99] we see that

↑[1/2] + ↑3/2 = ↑. (12.2)



Equation (12.2) motivates most of our present work. Indeed, ↑[1/2] = ↑ + ↓3/2 is the first
instance of an uptimal+ form. In general, these forms are given by

d0 ⋅ ∗ + d1 ⋅ ↑ + d2 ⋅ ↑2 + ⋅ ⋅ ⋅ + dn ⋅ ↑n + dn+1 ⋅ ↑3/2 (12.3)

(where di are integers). All positional values in our present game are written in these
terms.

3.3 Atomic weight


The atomic weight of a game, aw(G), is equal to the number of “ups” most closely
approximating G; so, in particular, aw(↑) = 1. See [2, p. 321] for the traditional presen-
tation of atomic weight. In what follows, we simply state these values as provided by
CGSuite.
We begin with the case aw(∗) = aw(↑r ) = 0 for all dyadic rational r > 1. Since
aw(↑3/2 ) = 0, it follows from equation (12.2) and the linearity of aw(⋅) that

aw(↑[1/2] ) = aw(↑ + ↑3/2 ) = 1.

Likewise, for any uptimal+ form, we have

aw(d0 ⋅ ∗ + d1 ⋅ ↑ + d2 ⋅ ↑2 + ⋅ ⋅ ⋅ + dn ⋅ ↑n + dn+1 ⋅ ↑3/2 ) = d1 .

Atomic weight theory provides the following key result: for any infinitesimal G, it
follows that
a. aw(G) ≥ 2 implies G > 0,
b. aw(G) ≤ −2 implies G < 0; otherwise,
c. if −2 < aw(G) < 2, then the unconditional winner is undetermined.
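As a decision aid, the rule can be phrased in a few lines (a sketch of ours; it assumes an integer atomic weight, although atomic weights can in general be more exotic):

def two_ahead_rule(aw):
    """Outcome forced by the atomic weight of an infinitesimal game."""
    if aw >= 2:
        return "Left wins (G > 0)"
    if aw <= -2:
        return "Right wins (G < 0)"
    return "undetermined: further analysis needed"

print(two_ahead_rule(2), "|", two_ahead_rule(0))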

Experts have noted, however, that sharper tools than atomic weight are needed for
extending the theory of all-small games. Here is a simple example: the distinction
between ↑[p] and ↑[q] , p ≠ q, can be important. However, that distinction is lost if
we consider atomic weights only. In particular, we have aw(↑[p] ) = aw(↑[q] ) = 1 for
p, q > 0.

3.4 Basic properties of ↑[1/2] and ↑3/2


Several inequalities involving ↑[1/2] and ↑3/2 will be needed in what follows. Although
the proofs are routine, they are included for completeness; also, the author has not
encountered some of these. In what follows, the expression G ⟶ᴸ A ⟶ᴿ B means "Left plays in G and chooses option A"; "Right plays in A and chooses B"; etc.

Lemma 3.8. It follows that


(a) ↑[1/2] > 0 and ↑3/2 > 0; also,
(b) ↑[1/2] ‖ ∗ and ↑3/2 ‖ ∗.

Proof. (a) Clearly, both ↑[1/2] ⊳ 0 and ↑[1/2] ≥ 0, and hence ↑[1/2] > 0. Likewise, it is
clear (for the same reasons) that ↑3/2 > 0. The proof of (b) is also immediate and thus
omitted.

The next two theorems are easily verified by CGSuite [7], and we omit their proofs.

Theorem 3.9. We have (a) ↑[1/2] ∗ = {0, ∗ | 0, ↑∗} and (b) ↑3/2 ∗ = {0, ∗ | ↓[1/2] }.

Theorem 3.10. The following inequalities hold: ↑ > ↑[1/2] > ↑3/2 > ↑2 > 0.

Theorem 3.11. We have


(a) ↑[1/2] > n ⋅ ↑3/2 for all n > 1 (i. e., ↑3/2 is infinitesimal with respect to ↑[1/2] ),
(b) ↑3/2 > n ⋅ ↑2 for all n > 1 (↑2 is infinitesimal with respect to ↑3/2 ).

Proof. We prove only (a), as the proof of (b) is similar. The case n = 1 was proven
above. Hence we set 𝒟n = ↑[1/2] + n ⋅ ↓3/2 . Suppose that Left is First in 𝒟n . Then she will
choose ↑[1/2] + (n − 1) ⋅ ↓3/2 + ↑[1/2] + ∗, a game of atomic weight 2, and thus 𝒟n ⊳ 0.
If Right is First in 𝒟n , then his three options are (i) ∗ + n ⋅ ↓3/2 , (ii) ↑ + n ⋅ ↓3/2 , and
(iii) ↑[1/2] + (n − 1) ⋅ ↓3/2 . It follows that (i) || (ii) and (ii) = (iii). Accordingly, we will
have (i) ⟶ᴸ ∗ + (n − 1) ⋅ ↓3/2 + ↑[1/2] ∗ = (n − 1) ⋅ ↓3/2 + ↑[1/2] > 0 (by induction) and (ii) ⟶ᴸ ↑ + (n − 1) ⋅ ↓3/2 + ↑[1/2] ∗ > 0 (a game of atomic weight 2). We conclude that 𝒟n ≥ 0 and hence ↑[1/2] > n ⋅ ↑3/2 for all n > 1.

4 Values of [n × m] for 2 ≤ n ≤ 12 and 2 ≤ m ≤ 12


4.1 The [2 × m] positions, m ≥ 2
Now we simplify the notation by setting Tm = [2 × m] for m ≥ 2 and T0 = T1 = 0. (Recall
that the single right option of Tm is 0.) By equation (12.1) the first six values of Tm are
as follows:
– T2 = ∗ since {T1 | 0} = {0 | 0}.
– T3 = ↑[1] ∗ since {T2 , T1 + T1 |0} = {∗, 0|0} = ↑∗ = ↑[1] ∗.
– T4 = ↓2 since {T3 , T2 | 0} = {↑[1] ∗, ∗ | 0} = {↑[1] ∗ | 0}. Here ↑[1] ∗ dominates ∗ as a
left option. Whereas T4 < 0, all other Tn are seen to be fuzzy.
– T5 = ↑[2] ∗ since {T4 , T3 + T1 , T2 + T2 | 0} = {↓2 , ↑[1] ∗, 0 | 0} = {↑[1] ∗, 0 | 0} = ↑[2] ∗. (In this case, 0 dominates ↓2 as a left option.)
– T6 = ↑[3] ∗, i. e., {T5 , T4 + T1 , T3 + T2 | 0} = {↑[2] ∗, ↓2 , ↑ | 0} = {↑[2] ∗, ↑ | 0}.

(↑ dominates ↓2 as a left option, whereas ↑[2] ∗ and ↑ are confused; see Lemma 3.4.) The last equation follows since ↑ reverses to 0, and thus {0, ↑[2] ∗ | 0} = ↑[3] ∗.
We can likewise verify that Tm = ↑[m−3] ∗ for 7 ≤ m ≤ 10. Although the details are
muddied a bit by a quirky term T4 = ↓2 , they pose no difficulties. For m > 10, the terms
“smooth out,” and we get the following result.

Theorem 4.1. For m > 10, we have Tm = [2 × m] = ↑[m−3] ∗.

Proof. We proceed by induction: for fixed m > 10 and all n such that 10 < n < m, we
assume that Tn = ↑[n−3] ∗. The proof that follows is for the case m = 2k + 1, k ≥ 5. That
for the even case is entirely similar (hence omitted). There are k + 1 left options of Tm :
(1) Tm−1 = ↑[m−4] ∗,
(2) Tm−2 = ↑[m−5] ∗,
(3) Tm−3 + T2 = ↑[m−6] ,
(4) Tm−4 + T3 = ↑[m−7] + ↑,
(5) Tm−5 + T4 = ↑[m−8] ∗ + ↓2 ,
(6) Tm−6 + T5 = ↑[m−9] + ↑[2] ,
(7) Tm−7 + T6 = ↑[m−10] + ↑[3] ,
..
.
(k + 1) Tm−k−1 + Tk = ↑[m−k−4] + ↑[k−3] .

First, options (2), (3), (4), and (5) are dominated as left options. In particular, the
following are true.
– Option (2) is dominated by (1) (see Lemma 3.1).
– Option (3) is dominated by (4), i. e., we have ↑[m−7] + ↑ > ↑[m−6] or, equivalently,
↑1 > ↑m−6 (recall that ↑n is a decreasing sequence).
– Both (4) and (5) are dominated by (6). This follows since

(4) ↑[m−7] + ↑ = .211 . . . 1m−7 , (5) ↑[m−8] + ↓2 = .101 . . . 1m−8 , and
(6) ↑[m−9] + ↑[2] = .221 . . . 1m−9 .

Now we dispatch with the remaining dominated options. The values of (6) through
(k + 1) (inclusive) are all of the form ↑[m−p] + ↑[q] , where p − q = 7 and p = 9, . . . , k + 1.
Now we claim that option (k + 1) dominates (6) through (k), inclusive. In this regard,
we rewrite option (k + 1) as

Tm−k−1 + Tk = Tk + Tk = ↑[k−3] + ↑[k−3] = .22 ⋅ ⋅ ⋅ 2k−3 ,

and the jth predecessor of option (k + 1) as

Tk+j + Tk−j = ↑[(k−3)+j] + ↑[(k−3)−j] = .22 ⋅ ⋅ ⋅ 2(k−3)−j 11 ⋅ ⋅ ⋅ 1(k−3)+j ,



from which the dominance follows. So far we have established that

T2k+1 = {↑[2k−3] ∗, ↑[k−3] + ↑[k−3] | 0}.

Now we claim that option (k + 1) reverses to 0. First, saying that options (1) and
(k + 1) are confused is equivalent to saying that

“↑[2k−3] + ↓[k−3] + ↓[k−3] is confused with ∗”.

This follows since ↑[2k−3] + ↓[k−3] + ↓[k−3] = .1̄1̄ ⋅ ⋅ ⋅ 1̄k−3 11 ⋅ ⋅ ⋅ 12k−3 , and a result from Siegel [8, p. 96] confirms that the sum is confused with ∗.
Finally, we show that ↑[k−3] + ↑[k−3] reverses to 0. If Left chooses this option, then
Right can only respond with ↑[k−3] ∗ = {0, ↑[k−4] ∗ | 0}. Hence we must show that 𝒟 =
T2k+1 + ↓[k−3] ∗ ≥ 0. If Right is First in 𝒟, then he has three options:

(i) 0 + ↓[k−3] ∗, (ii) T2k+1 + 0, and (iii) T2k+1 + ↓[k−4] ∗.

However, Left has winning moves in each case. In (i), she plays in ↓[k−3] ∗ and
chooses 0; in (ii), she chooses ↑[k−3] + ↑[k−3] > 0; and in (iii), she plays in T2k+1 ,
chooses ↑[2k−3] ∗, and gets the overall result ↑[2k−3] ∗ + ↓[k−4] ∗ = ↑[2k−3] + ↓[k−4] > 0. This
completes the proof.
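Combining Theorem 4.1 with the small cases computed above, the [2 × m] values admit a one-line lookup (a sketch of ours, with an ad hoc string encoding in which "up[k]*" stands for ↑[k] ∗):

def T(m):
    """Value of [2 x m] as a string, per Section 4.1 and Theorem 4.1."""
    if m <= 1:
        return "0"
    if m == 2:
        return "*"
    if m == 3:
        return "up[1]*"
    if m == 4:
        return "down_2"            # the one quirky term, T4 = ↓2
    return "up[%d]*" % (m - 3)     # Tm = ↑[m−3]∗ for all m ≥ 5

print([T(m) for m in range(2, 13)])
# ['*', 'up[1]*', 'down_2', 'up[2]*', 'up[3]*', ..., 'up[9]*']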

Proceeding with 3 ≤ n ≤ 12 and 3 ≤ m ≤ 12, several sequences of infinitesimals are defined. Their properties are described, positional values are written in terms of the sequences, and a couple of examples are given. First, two preliminaries:

Definition 4.2. A sequence of games Gn is linear if the difference Gn+1 − Gn is constant, that is, independent of n for n ≥ 1. Otherwise, such a sequence is nonlinear.

Definition 4.3. A sequence of infinitesimals Gn is virtually increasing if Gn < Gn+1 ∗ or, equivalently, Gn ∗ < Gn+1 for n ≥ 1. (Virtually decreasing is similarly defined.)

4.2 Linear sequences of extended uptimal forms


Four sequences follow (α, β, γ a , and γ b ), all of which are virtually increasing. In each
case, the first differences are equal to ↑∗.

Definition 4.4. The alpha-sequence is given by

α1 = ↑[1/2] and αn = {0 | αn−1 } for n ≥ 2.

Definition 4.5. The beta-sequence is given by

β1 = ↑3/2 and βn = {0 | βn−1 } for n ≥ 2.



Let us rewrite equation (12.2) as α1 = ↑ − ↑3/2 . (As mentioned earlier, this is the first
example of an uptimal+ form.) Recall that ↑3/2 is infinitesimal with respect to α1 .

Lemma 4.6. We have αn > 0 and βn > 0 for all n ≥ 1.

Proof. The case n = 1 appeared in Lemma 3.8. For αn = {0 | αn−1 }, again αn ⊳ 0. We have αn−1 > 0 by induction; hence αn ≥ 0 and, consequently, αn > 0. The details for the beta sequence are similar (hence omitted).

Lemma 4.7. Further comparisons follow for the alpha and beta sequences:
(a) for n ≥ 1, we have αn > βn ;
(b) α1 || ∗ and αn > ∗ for n > 1;
(c) α1 < ↑, α2 || ↑, and αn > ↑ for n ≥ 3;
(d) αn+1 || αn for n ≥ 1.

Moreover, statements (b), (c), and (d) hold for the beta sequence as well.

Proof. (a) For n = 1, the inequality follows from equation (12.2) and Lemma 4.6. Suppose n > 1 and set 𝒟 = αn − βn = {0 | αn−1 } + {−βn−1 | 0}. If Left is First in 𝒟, then 𝒟 ⟶ᴸ αn − βn−1 , and Right can reply with either αn−1 − βn−1 > 0 (by induction) or αn − 0 > 0. Thus 𝒟 ⊳ 0. Otherwise, if Right starts, then either

𝒟 ⟶ᴿ αn−1 − βn ⟶ᴸ αn−1 − βn−1 > 0 or 𝒟 ⟶ᴿ αn + 0 > 0,

and hence 𝒟 ≥ 0. This proves that αn > βn for n ≥ 1.


(b) Recall that α1 ‖ ∗ (from Lemma 3.8). Now consider αn > ∗ for n ≥ 2. Here we need to show that 𝒟n = αn + ∗ = {0 | αn−1 } + {0 | 0} > 0. It is clear that 𝒟n ⊳ 0. Also, 𝒟n ≥ 0 since either 𝒟n ⟶ᴿ αn−1 + ∗ > 0 (by induction) or 𝒟n ⟶ᴿ αn > 0. This proves the inequality.
(c) Again, the first inequality follows from equation (12.2). For the second one, set 𝒟2 = α2 + ↓. Left's winning move in 𝒟2 is to α2 + ∗ > 0. Otherwise, if Right is First, then his winning move is to α1 + ↓ < 0 (since α1 < ↑, as just shown), and hence α2 || ↑. The inequality αn > ↑ for n ≥ 3 follows by a simple induction.
(d) Set 𝒟n = αn − αn+1 = {0 | αn−1 } + {−αn | 0}. In particular, for n = 1, 𝒟1 = α1 − α2 = {0 | ∗, ↑} + {∗, ↓ | 0 || 0}. We have 𝒟1 ⊳ 0 because 𝒟1 ⟶ᴸ α1 − α1 = 0. Also, 𝒟1 ⊲ 0 since 𝒟1 ⟶ᴿ ∗ − α2 < 0. Thus we have α2 || α1 .
In the general case, again, we have 𝒟n ⊳ 0. Now 𝒟n ⊲ 0 since 𝒟n ⟶ᴿ αn−1 − αn+1 , from which Left can respond with 0 − αn+1 < 0 or αn−1 − αn . The latter is fuzzy by induction. In the above proof, αn−1 − αn ⊲ 0, and so αn || αn+1 .

The proofs of (b), (c), and (d) for the beta sequence are omitted.

Theorem 4.8. The sequences αn and βn are virtually increasing.



Proof. Setting n = 1, we show that α2 ∗ > α1 or 𝒟1 = α2 − α1 + ∗ > 0. We have 𝒟1 ⊳ 0 because 𝒟1 ⟶ᴸ α2 + ∗ + ∗ > 0 (Lemma 4.6). Also, we have 𝒟1 ≥ 0 because either

𝒟1 ⟶ᴿ α1 − α1 + ∗ ⟶ᴸ α1 − α1 = 0 or 𝒟1 ⟶ᴿ α2 + 0 + ∗ > 0 (by Lemma 4.7(b))
or 𝒟1 ⟶ᴿ α2 − α1 + 0 ⟶ᴸ α2 + ∗ > 0.

Thus α2 ∗ > α1 . For n ≥ 2, set 𝒟n = αn+1 ∗ − αn = {0 | αn } + {−αn−1 | 0} + ∗. Left wins 𝒟n by choosing αn+1 − αn−1 + ∗ because Right's three options from the latter are

αn ∗ − αn−1 > 0 (by induction), αn+1 ∗ > 0, and αn+1 − αn−1 .

Left will respond to the latter by playing in −αn−1 , and an easy induction will show that the latter is positive. Consequently, 𝒟n ⊳ 0. Also, we claim that 𝒟n ≥ 0 because either

𝒟n ⟶ᴿ (αn ∗) − αn ⟶ᴸ 0 or 𝒟n ⟶ᴿ αn+1 ∗ > 0 or 𝒟n ⟶ᴿ αn+1 − αn .

In the latter position, Left can win by playing in −αn . It is left for the reader to provide the details. Thus we have αn+1 ∗ > αn .

Lemma 4.9. For n ≥ 2, αn ∗ = {0 | αn−1 ∗} and βn ∗ = {0 | βn−1 ∗}.

Proof. For n = 1, see Theorem 3.9. From the definition, αn + ∗ = {∗, αn | αn−1 ∗, αn }. Left's option ∗ is dominated by αn (from Lemma 4.7(b)). Also, αn reverses to 0 since

αn ∗ ⟶ᴸ αn ⟶ᴿ αn−1 and αn−1 < αn ∗ (from Theorem 4.8).

Lastly, αn is dominated by αn−1 ∗ as a right option. The proof for βn ∗ is omitted.

Theorem 4.10. The successive differences follow. We have αn −αn−1 = ↑∗ and βn −βn−1 =
↑∗, n ≥ 2.

Proof. We will consider the alpha case only. For n = 2, set

𝒟2 = α2 − α1 + ↓∗ = {0 | α1 } + {∗, ↓ | 0} + {0 | 0, ∗}.

Suppose Left is First in 𝒟2 . Then she has four options:

(i) 0 − α1 + ↓∗, (ii) α2 + ∗ + ↓∗, (iii) α2 + ↓ + ↓∗, and (iv) α2 − α1 + 0.

One readily shows that (i), (ii), and (iii) are all dominated by (iv) as a left option. Moreover, Right's response to (iv) is immediate, that is, (iv) ⟶ᴿ α1 − α1 + 0 = 0. We conclude that 𝒟2 ≤ 0.
If Right is First in 𝒟2 , then his options are (i) α1 − α1 + ↓∗, (ii) α2 + 0 + ↓∗, (iii)
α2 − α1 + 0, and (iv) α2 − α1 + ∗. In this case, (ii), (iii), and (iv) are all dominated as right
options by (i), so we need only consider (i) ⟶ᴸ 0 (where Left plays in ↓∗). Consequently, 𝒟2 ≥ 0, and we conclude that 𝒟2 = 0.
For n > 2, we set 𝒟n = αn − αn−1 + ↓∗ = {0 | αn−1 } + {−αn−2 | 0} + {0 | 0, ∗}. Here
Left’s options are

(i) 0 − αn−1 + ↓∗, (ii) αn − αn−2 + ↓∗, and (iii) αn − αn−1 + 0.

Now it happens that (iii) dominates both (i) and (ii) as left options. In the first case, (iii) − (i) = αn + ↑∗ has two Right options, but Left wins both, that is,

αn−1 + ↑∗ ⟶ᴸ αn−1 + 0 > 0 and αn + 0 > 0.

Additionally, (iii) is equal to (ii) by induction. Thus we need only consider (iii). Since (iii) ⟶ᴿ αn−1 − αn−1 = 0, it follows that 𝒟n ≤ 0.
If Right is First in 𝒟n , then his options are given by

(i) ↓∗ (a play in αn ), (ii) αn + ↓∗ (a play in −αn−1 ),


(iii) αn − αn−1 (a play in ↓∗), and (iv) αn − αn−1 + ∗ (a play in ↓∗).

It is left for the reader to show that option (i) dominates each of the other three as a right option. That done, we will have (i) ⟶ᴸ 0 and, consequently, 𝒟n ≥ 0. We have shown that 𝒟n = 0.

Theorem 4.11. For integers n > m > 0,

αn − αm = (n − m) ⋅ ↑ + ∗ if n and m are of opposite parity,
αn − αm = (n − m) ⋅ ↑ if n and m are of the same parity.

Proof. From Theorem 4.10,

∑_{i=2}^{n} (αi − αi−1 ) = (n − 1) ⋅ (↑∗),

and

∑_{i=2}^{n} (αi − αi−1 ) = ∑_{i=2}^{m} (αi − αi−1 ) + ∑_{i=m+1}^{n} (αi − αi−1 ) = (m − 1) ⋅ (↑∗) + (αn − αm ),

from which the result follows.

Neil A. McKay responded to an earlier version of this work by suggesting the following equation.

Corollary 4.12. It follows that αn = (n − 1) ⋅ ↑ + α1 if n is odd, and αn = (n − 1) ⋅ ↑ + α1 ∗ if n is even.

Proof. In Theorem 4.11, set m = 1.

Applying α1 = ↑ − ↑3/2 to Corollary 4.12, we obtain the following:

Corollary 4.13. The Uptimal+ Form is given by

αn = n ⋅ ↑ − ↑3/2 if n is odd,
αn = ∗ + n ⋅ ↑ − ↑3/2 if n is even.

We mentioned that aw(↑3/2 ) = 0, so it follows that aw(αn ) = n.

Corollary 4.14. It follows that α1 < ↑ < α2 ∗ < 2 ⋅ ↑ < α3 < 3 ⋅ ↑ < ⋅ ⋅ ⋅ .

Proof. Let n be an odd positive integer. From Corollary 4.13 we have αn = n⋅↑−↑3/2 < n⋅↑
(since ↑3/2 > 0). If n is even, then

αn = ∗ + n ⋅ ↑ − ↑3/2 = ∗ + (n − 1) ⋅ ↑ + (↑ − ↑3/2 ) > ∗ + (n − 1) ⋅ ↑

since ↑ − ↑3/2 > 0.

Exactly the same inequalities (as in Corollary 4.14) hold for the β-sequence. More-
over, the same reasoning that led to Corollary 4.13 can be used to provide a similar
statement for the β-sequence (the details are omitted).

Proposition 4.15. The Uptimal+ Form is given by

βn = (n − 1) ⋅ ↑ + ↑3/2 if n is odd,
βn = ∗ + (n − 1) ⋅ ↑ + ↑3/2 if n is even.

Here we have aw(βn ) = n − 1.

Definition 4.16. The gamma-a sequence is given by

γ1a = {0 | α1 , α2 } and γna = {0 | γn−1a } for n ≥ 2.

Lemma 4.17. It follows that


(a) γna > 0 and γna > ∗ for n ≥ 1,
(b) γ1a || ↑ while γna > ↑ for n ≥ 2,
(c) γna ∗ > αn , n ≥ 1.

Proof. (a) The first inequality is obvious. For the latter, set 𝒟n = γna + ∗. For n = 1, we have 𝒟1 ⟶ᴸ γ1a + 0 > 0. But if Right starts, then

[𝒟1 ⟶ᴿ α1 + ∗ ⟶ᴸ α1 > 0] or [𝒟1 ⟶ᴿ α2 + ∗ ⟶ᴸ α2 > 0] or [𝒟1 ⟶ᴿ γ1a > 0];

consequently, γ1a > ∗. For n > 1, 𝒟n ⟶ᴸ γna > 0. Otherwise, we have

𝒟n ⟶ᴿ γn−1a + ∗ > 0 (by induction) or 𝒟n ⟶ᴿ γna > 0.

This shows that γna > ∗ for all n ≥ 1.


(b) Set 𝒟n = γna + ↓. For the first relation, we have 𝒟1 ⟶ᴸ γ1a + ∗ > 0 (by part (a)) and 𝒟1 ⟶ᴿ α1 + ↓ < 0 (since α1 + ↑3/2 = ↑); consequently, γ1a || ↑. For the second inequality, we first consider 𝒟2 = γ2a + ↓. In this case, Left's winning move is 𝒟2 ⟶ᴸ γ2a + ∗ > 0. If Right is First in 𝒟2 , then either 𝒟2 ⟶ᴿ γ1a + ↓ ⟶ᴸ γ1a + ∗ > 0 or 𝒟2 ⟶ᴿ γ2a + 0 > 0. This shows that γ2a > ↑. Finally, we show that γna > ↑ for all n ≥ 2. Similarly, Left's winning move is 𝒟n ⟶ᴸ γna + ∗ > 0. Otherwise,

𝒟n ⟶ᴿ γn−1a + ↓ > 0 (by induction) or 𝒟n ⟶ᴿ γna + 0 > 0.

We conclude that γna > ↑ for all n ≥ 2.


(c) For n ≥ 1, we prove that 𝒟n > 0 where

a a
𝒟n = γn − αn + ∗ = {0 | γn−1 } + {−αn−1 | 0} + {0 | 0}.

First, we claim that 𝒟1 ⊳ 0 where 𝒟1 = {0 | α1 , α2 }+{∗, ↓ | 0}+{0 | 0}. If Left is First, then
she will choose ∗ in −α1 , and the result is γ1a > 0 (from part (a)). Now suppose Right is
L
First. Then his options are (i) α1 − α1 + ∗ 󳨀
→ 0, (ii) α2 − α1 + ∗ > 0 (from Theorem 4.8),
L
(iii) γna + 0 + ∗ > 0 (from part (a)), and (iv) γ1a − α1 + 0 󳨀
→ γ1a + ∗ > 0. Hence 𝒟1 ≥ 0, and
a
the foregoing shows that γ1 ∗ > α1 .
If Left is First in 𝒟n (n ≥ 2), then she will choose γna − αn−1 + ∗, from which Right
has three options:

a
(i) γn−1 − αn−1 + ∗ > 0 (by induction), (ii) γna + 0 + ∗ > 0 (from part (a)), and
(iii) γna − αn .

Left will respond to (iii) with γna − αn−1 . By induction it follows that

R L
γna − αn−1 󳨀 a
→ γn−1 a
− αn−1 = (γn−1 ∗) − αn−1 + ∗ 󳨀 a
→ (γn−1 ∗) − αn−1 > 0 or
R
γna − αn−1 󳨀
→ γna + 0 > 0.

This shows that 𝒟n ⊳ 0. Now let Right be First. In this case,

R a L a
→ γn−1 − αn + ∗ 󳨀
𝒟n 󳨀 → γn−1 − αn−1 + ∗ > 0 (by induction), or
R a
→ γn + 0 + ∗ > 0 (by part (a)),
𝒟n 󳨀 or
Playing Bynum’s game cautiously | 215

R a L a
→ γn − αn + 0 󳨀
𝒟n 󳨀 → γn − αn−1 .

It was previously seen that the last position is a second player win. We now have
𝒟n ≥ 0 and thus 𝒟n > 0. We conclude that γna ∗ > αn .

Theorem 4.18. The sequence γna is virtually increasing.

Proof. Set 𝒟n = γna − γn−1a + ∗ = {0 | γn−1a } + {−γn−2a | 0} + {0 | 0}. For γ2a > γ1a ∗, we have 𝒟2 = γ2a − γ1a + ∗ = {0 | γ1a } + {−α1 , −α2 | 0} + {0 | 0}. If Left starts in 𝒟2 , then her winning move is to γ2a − α1 + ∗ because Right can only respond to the latter with

(i) γ1a − α1 + ∗ or (ii) γ2a + 0 + ∗ or (iii) γ2a − α1 + 0.

But (i) and (ii) are positive (Lemma 4.17(a, c)). Also, (iii) is positive since γ2a − α1 > (α2 ∗) − α1 > 0 by Lemma 4.17(c) and the fact that αn is virtually increasing. It follows that 𝒟2 ⊳ 0. Now suppose Right is First in 𝒟2 . Then his options are

(i) γ1a − γ1a + ∗, (ii) γ2a + 0 + ∗, and (iii) γ2a − γ1a + 0.

Left clearly wins (i), and from Lemma 4.17(a) it follows that (iii) dominates (ii) as a Right option. It is left for the reader to show that Left can win (iii). This done, we will have 𝒟2 ≥ 0 and, consequently, γ2a > γ1a ∗.
Now we claim that γna > γn−1a ∗ for n ≥ 3. If Left is First in 𝒟n , then her winning move is to γna − γn−2a + ∗ since Right's options from the latter are

(i) γn−1a − γn−2a + ∗, (ii) γna + 0 + ∗, and (iii) γna − γn−2a + 0,

none of which will provide him with a winning move. Again, (iii) dominates (ii) as a right option. Left wins (i) by induction, and following (iii), Left will respond with

γn−1a − γn−3a = (γn−1a − γn−2a ∗) + (γn−2a ∗ − γn−3a ) > 0 (again by induction).

Thus we have 𝒟n ⊳ 0. Now let Right move first in 𝒟n . Then he has three options:

(i) 𝒟n ⟶ᴿ γn−1a − γn−1a + ∗, (ii) 𝒟n ⟶ᴿ γna + 0 + ∗, and (iii) 𝒟n ⟶ᴿ γna − γn−1a + 0.

Here (ii) dominates (iii) as a right option, option (i) is fuzzy, and option (ii) is positive by Lemma 4.17(a). Hence 𝒟n ≥ 0, and we conclude that γna > γn−1a ∗ (n ≥ 3).

Lemma 4.19. We have γ1a ∗ = {0 | α1 ∗, α2 ∗} and, for n ≥ 2, γna ∗ = {0 | γn−1a ∗}.

Proof. By definition, γ1a + ∗ = {∗, γ1a | γ1a , α1 ∗, α2 ∗}. First, γ1a is dominated by α1 ∗ as a
right option (by Lemma 4.17(c)). It is also the case that the left options ∗ and γ1a reverse
to 0 (the details are omitted).

Now let n ≥ 2, so that γna + ∗ = {∗, γna | γn−1a ∗, γna }. In this case, γn−1a ∗ dominates γna as a right option (since γna is virtually increasing). Also, γna dominates ∗ as a left option. Finally, γna reverses to 0 since (γna + ∗) ⟶ᴸ γna ⟶ᴿ γn−1a ⟶ᴸ 0 and γna ∗ > γn−1a .
The lemma that follows is stated without proof.

Lemma 4.20. The first term of γna is given by

γ1a = 2 ⋅ α1 + ∗ = 2 ⋅ ↑ + 2 ⋅ ↓3/2 + ∗.

Theorem 4.21. For n ≥ 2, we have γna − γn−1a = ↑∗.

Proof. Set 𝒟n = γna − γn−1a + ↓∗ = {0 | γn−1a } + {−γn−2a | 0} + {0 | 0, ∗}. Suppose Left moves first in

𝒟2 = γ2a − γ1a + ↓∗ = {0 | γ1a } + {−α1 , −α2 | 0} + {0 | 0, ∗}.

Then she has four options:

(i) 0 − γ1a + ↓∗, (ii) γ2a − α1 + ↓∗, (iii) γ2a − α2 + ↓∗, and (iv) γ2a − γ1a + 0.

However, option (iv) dominates the first three of these as a left option.
To begin with, (iv) − (i) = γ2a + ↑∗ ≥ 0 because

(γ2a + ↑∗) ⟶ᴿ (γ1a + ↑∗) ⟶ᴸ (γ1a + 0) > 0 or (γ2a + ↑∗) ⟶ᴿ (γ2a + 0) > 0,

(iv) − (ii) = −γ1a + α1 + ↑∗ = ↑ − α1 > 0 (by Theorem 3.10 and Lemma 4.20), or
(iv) − (iii) = −γ1a + α2 + ↑∗ = α2 − 2 ⋅ α1 + ↑
= (α2 − α1 ) − (α1 + ↓) = ↑∗ + β1 (from Theorem 4.10)
= (β2 − β1 ) + β1 = β2 > 0.

Thus we need only consider (iv) γ2a − γ1a ⟶ᴿ γ1a − γ1a = 0. This proves that 𝒟2 ≤ 0. If Right
is First in 𝒟2 , then he also has four options:

(i) γ1a − γ1a + ↓∗, (ii) γ2a + 0 + ↓∗, (iii) γ2a − γ1a + 0, and (iv) γ2a − γ1a + ∗.

In this case, it is left for the reader to show that option (i) is the single, dominant right option. This done, we will have (i) ↓∗ ⟶ᴸ 0, so that 𝒟2 ≥ 0. This establishes that 𝒟2 = 0.
In general, Left’s options for n ≥ 3 are

(i) 0 − γn−1a + ↓∗, (ii) γna − γn−2a + ↓∗, and (iii) γna − γn−1a .

Here we claim that (ii) dominates both (i) and (iii) as a left option since

(ii) − (i) = (γn−1a − γn−2a ∗) + γna ∗ > 0 (by induction),
(ii) − (iii) = γn−1a − γn−2a + ↓∗ = 0 (by induction).

Finally, (ii) γna − γn−2a + ↓∗ ⟶ᴿ γn−1a − γn−2a + ↓∗ = 0, and thus 𝒟n ≤ 0. In the proof that 𝒟n ≥ 0, there are four right options among which ↓∗ is dominant (the details are omitted). Since ↓∗ ⟶ᴸ 0, we declare that γna − γn−1a = ↑∗ for n ≥ 2.

A proof of the following is omitted since it is similar to that of Corollary 4.13.

Corollary 4.22. The Uptimal+ Form is given by

γna = ∗ + (n + 1) ⋅ ↑ − 2 ⋅ ↑3/2 if n is odd,
γna = (n + 1) ⋅ ↑ − 2 ⋅ ↑3/2 if n is even.

Thus we see that aw(γna ) = n + 1. Now, stated without proof, we have the following:

Corollary 4.23. Upper and lower bounds on γna are given by


(a) (n − 1) ⋅ ↑ < γna < (n + 1) ⋅ ↑ if n is even,
(b) (n − 1) ⋅ ↑ < γna < (n + 1) ⋅ ↑ + ∗ if n is odd.

The three sequences that we have seen thus far are related by the following corol-
lary. (Again, the details are omitted.)

Corollary 4.24. For n ≥ 1 and m ≥ 1, we have
(a) βn + γma = αn+m ,
(b) αn + αm = γn+m−1a + ∗.

Definition 4.25. The gamma-b sequence is given by

γ1b = {0 | γ1a , γ2a } and γnb = {0 | γn−1b } for n ≥ 2.

Lemma 4.26. For all n ≥ 1, we have


(a) γnb > 0 and γnb > ∗,
(b) γnb > ↑,
(c) γnb > γna ∗ > 0.

Proof. We present the proof of part (c) only. Set

𝒟n = γnb − γna ∗ = {0 | γn−1b } + {−γn−1a ∗ | 0};

so, in particular, 𝒟1 = {0 | γ1a , γ2a } + {−α1 ∗, −α2 ∗ | 0}. For n = 1, Left's winning move is

𝒟1 ⟶ᴸ {0 | γ1a , γ2a } + {0, ↓∗ | 0, ∗}. (Recall that −α1 ∗ = {0, ↓∗ | 0, ∗}.)

This follows since Right's options in the latter are

(i) γ1a − α1 ∗, (ii) γ2a − α1 ∗, (iii) γ1b + 0, and (iv) γ1b + ∗.

Left wins the first two of these by choosing 0 in −α1 ∗, and (iii) is positive from part (a). Additionally, Left wins (iv) by playing in ∗. Hence 𝒟1 ⊳ 0.
If Right is First in 𝒟1 , then his options are (i) γ1a − γ1a ∗, (ii) γ2a − γ1a ∗, and (iii) γ1b + 0. Left obviously wins (i) and (iii), and she wins (ii) since γna is virtually increasing. Thus we have 𝒟1 ≥ 0, and hence γ1b > γ1a ∗.
If Left moves first in 𝒟n , then we will have 𝒟n ⟶ᴸ γnb − γn−1a ∗. Now Right can reply with γn−1b − γn−1a ∗ > 0 (by induction) or γnb > 0. Hence 𝒟n ⊳ 0. Otherwise, if Right is First, then we can have

𝒟n ⟶ᴿ γn−1b − γna ∗ ⟶ᴸ γn−1b − γn−1a ∗ > 0 (by induction) or 𝒟n ⟶ᴿ γnb > 0.

This shows that 𝒟n ≥ 0 and, finally, γnb > γna ∗.

Theorem 4.27. The sequence γnb is virtually increasing.

Proof. We claim that 𝒟n = γnb − γn−1b + ∗ > 0. For n = 2, we have

𝒟2 = γ2b − γ1b + ∗ = {0 | γ1b } + {−γ1a , −γ2a | 0} + ∗.

First, 𝒟2 ⊳ 0 because Left will choose γ2b − γ2a + ∗ > 0 (where the inequality follows from Lemma 4.26(c)). Otherwise, if Right is First, then his options are

(i) γ1b − γ1b + ∗ ⟶ᴸ 0, (ii) γ2b − 0 + ∗ > 0, and (iii) γ2b − γ1b ,

where Left is seen to have a winning move in (iii) (following two more moves). Thus 𝒟2 ≥ 0 and, consequently, γ2b > γ1b ∗. Suppose that n ≥ 3. In this case, Left's winning move is given by

𝒟n ⟶ᴸ γnb − γn−2b + ∗ = (γnb − γn−1b ) + (γn−1b − γn−2b ∗),

in which γnb − γn−1b ≥ 0 and γn−1b − γn−2b ∗ > 0. (The first inequality is immediate, and the second one follows by induction.) Therefore 𝒟n ⊳ 0. It is easily shown that Right has no winning moves in 𝒟n . Thus 𝒟n ≥ 0, and the proof is done.

The following lemma is easily verified by CGSuite, so its proof is omitted.

Lemma 4.28. We have γ1b ∗ = {0 | γ1a ∗, γ2a ∗} and γnb ∗ = {0 | γn−1b ∗} for n ≥ 2.

Lemma 4.29. The first term of γnb is given by

γ1b = γ1a + α1 ∗ = 3 ⋅ ↑ + 3 ⋅ ↓3/2 .



Proof. Now we set

𝒟 = γ1b − γ1a − α1 ∗ = {0 | γ1a , γ2a } + {−α1 , −α2 | 0} + {0, ↓∗ | 0, ∗}

and show that 𝒟 = 0. If Left is First, then she will not play in γ1b because if she does,
then Right will choose 0 in −α1 ∗. Thus we consider Left’s other options:

(i) γ1b − α1 − α1 ∗, (ii) γ1b − α2 − α1 ∗, (iii) γ1b − γ1a + 0, and (iv) γ1b − γ1a + ↓∗.

Here (i) is confused with (ii) from Lemma 4.7(d). Also, (i) is equal to (iii) from Lemma 4.20, and (ii) is equal to (iv) due to Lemma 4.20. Thus we need only consider

(i) ⟶ᴿ γ1a − α1 − α1 ∗ = 0 (by Lemma 4.20) and
(ii) ⟶ᴿ γ2a − α2 − α1 ∗ = 0 (by Corollary 4.13 and Corollary 4.22).

Thus 𝒟 ≤ 0. Now suppose Right is First. Then his options in 𝒟 are as follows:

(i) −α1 ∗, (ii) γ2a − γ1a − α1 ∗, (iii) γ1b + 0 − α1 ∗,


(iv) γ1b − γ1a + 0, and (v) γ1b − γ1a + ∗.

In this case, (i) clearly dominates (iii) as a right option. Additionally, (i) dominates
both (iv) and (v) as right options. In particular,

(i) − (iv) = α1 − γ1b (by Lemma 4.20)


< α1 − γ1a ∗ (by Lemma 4.26(c))
<0 (by Lemma 4.17(c))
and (i) − (v) = α1 − γ1b ∗ (by Lemma 4.20)
<0 (the inequality is left for the reader).

However, (i) and (ii) are confused, so we need to observe that

(i) ⟶ᴸ 0 and (ii) ⟶ᴸ γ2a − α2 − α1 ∗ = 0 (by Corollaries 4.12 and 4.22).

Now we have 𝒟 ≥ 0, and hence γ1b = γ1a + α1 ∗. The second equation in the lemma
follows from substitution by using previously established uptimal+ forms.

It is fairly obvious that the method we used to obtain the first differences, uptimal+
forms, and so on for all previous sequences can be used here as well. Consequently,
further properties of the γ b -sequence and of the six sequences defined below are all
stated without proof.

Theorem 4.30. We have γnb − γn−1b = ↑∗ for all n ≥ 2.

Corollary 4.31. The Uptimal+ Form is given by

γnb = (n + 2) ⋅ ↑ − 3 ⋅ ↑3/2 if n is odd,
γnb = ∗ + (n + 2) ⋅ ↑ − 3 ⋅ ↑3/2 if n is even.

Hence aw(γnb ) = n + 2.

4.3 Nonlinear sequences of extended uptimals


The sequences defined above and those that follow are stated mostly in the order in which they occurred while translating CGSuite output. Whereas the
sequences αn , βn , γna , and γnb were seen to be linear, the next four are nonlinear. Also,
by comparison, the next four are strictly monotonic.

Definition 4.32. The gamma-c sequence is given by

γ1c = {γ3a ∗ | γ1a ∗, γ2a ∗} and γnc = {γ3a ∗ | γ1a ∗, γn−1c ∗} for n ≥ 2.

Lemma 4.33. For n ≥ 1, we have


(a) γnc > 0 and γnc > ∗,
(b) ↑ < γnc < 2 ⋅ ↑ and 2 ⋅ ↑ < γnc ∗ < 3 ⋅ ↑,
(c) γnc ‖ γ1b but γnc < γ2b .

Lemma 4.34. The first term of γnc is given by

γ1c = γ2a + ↓2 ∗.

Theorem 4.35. For n ≥ 2, we have γnc − γn−1c = ↓n+1 .

Corollary 4.36. The γnc sequence is strictly decreasing.

Corollary 4.37. The Uptimal+Form is given by

γnc = ∗ + 4 ⋅ ↑ + ↓[n+1] − 2 ⋅ ↑3/2 .

In this case, aw(γnc ) = 3 for all n. Next, we have another strictly decreasing se-
quence.

Definition 4.38. The delta sequence is given by

δ1 = {α2 | ∗, α1 } and δn = {α2 | ∗, δn−1 } for n ≥ 2.

Lemma 4.39. For all n ≥ 1, we have


(a) δn > 0,

(b) δn || ∗ but 2 ⋅ δn > ∗,


(c) ↑ > δn > ↑2 for all n ≥ 1,
(d) δ1 < γnc .

Lemma 4.40. The first term of δn is given by

δ1 = α1 + ↓2 = ↑ + ↓2 − ↑3/2 .

Lemma 4.41. We have δn − δn−1 = ↓n+1 for n ≥ 2.

Corollary 4.42. The delta sequence is strictly decreasing.

Corollary 4.43. The Uptimal+ Form is given by

δn = 2 ⋅ ↑ + ↓[n+1] − ↑3/2 for all n ≥ 1.

Hence aw(δn ) = 1, n ≥ 1.

Definition 4.44. The epsilon-a sequence is given by

ε1a = {0, ∗ | −α1 } and εna = {0, εn−1a | −α1 } for n ≥ 2.

The first term ε1a was given earlier by Theorem 3.9(b). We restate it below (in
Lemma 4.46) because it appears in the uptimal+ form.

Lemma 4.45. We have


(a) εna || 0, εna || ↑, εna < ⇑, and 2 ⋅ εna < ↑ for all n ≥ 1,
(b) εna || ↑m for all m ≠ n,
(c) εna > ∗ for all n ≥ 1,
(d) εna || δm for all m ≠ n.

Lemma 4.46. The first term of εna is given by

ε1a = β1 ∗ = ↑3/2 ∗.

Theorem 4.47. For n ≥ 2, we have εna − εn−1a = ↑n .

Corollary 4.48. The epsilon-a sequence is strictly increasing.

Corollary 4.49. The Uptimal+ Form is given by

εna = ∗ + ↓ + ↑[n] + ↑3/2 .

Hence aw(εna ) = 0 for all n.

Definition 4.50. The theta-a sequence is given by

θ1a = {γ1a ∗ | 0, α1 ∗} and θna = {γ1a ∗ | 0, θn−1a } for n ≥ 2.

Lemma 4.51. Inequalities (a), (b), and (c) in Lemma 4.45 are also valid for θna with the exception that 2 ⋅ θna > ↑. In addition, θna > εma for all n ≠ m.
for all n ≠ m.

Lemma 4.52. The first term of θna is given by

θ1a = ∗ + α1 − ↑3/2 .

Lemma 4.53. For n ≥ 2, we have θna − θn−1a = ↓n .

Lemma 4.54. The θna -sequence is strictly decreasing.

Theorem 4.55. The Uptimal+ Form is given by

θna = ∗ + 2 ⋅ ↑ + ↓[n] − 2 ⋅ ↑3/2 .

Hence aw(θna ) = 1 for n ≥ 1.

4.4 Two sequences of uptimals (in the usual sense)


Finally, two additional sequences follow, whose terms carry no appended ↑3/2 .

Definition 4.56. The epsilon-b sequence is given by

ε1b = {0, ↑2 ∗ | ↓} and εnb = {0, εn−1b | ↓} for n ≥ 2.

The terms of εa and εb both have “star-like” characteristics (vis-à-vis comparisons


with 0 and ↑), whereas each term of εna is greater than every term of εnb .

Lemma 4.57. Inequalities (a), (b), and (c) in Lemma 4.45 are also valid for εnb . In addition, εnb < εma for all n ≠ m.

Lemma 4.58. The first term of εnb is given by

ε1b = ε3a − β1 = ∗ + ↓ + ↑[3] .

Theorem 4.59. For n ≥ 2, we have εnb − εn−1b = ↑n+2 .

Corollary 4.60. The Uptimal Form is given by

εnb = ∗ + ↓ + ↑[n+2] .

Hence aw(εnb ) = 0 for all n.

Definition 4.61. The theta-b sequence is given by

θ1b = {⇑ | 0, ↑∗} and θnb = {⇑ | 0, θn−1b } for n ≥ 2.

Lemma 4.62. Inequalities (a), (b), and (c) in Lemma 4.45 are also valid for θnb with the exception that 2 ⋅ θnb > ↑. In addition, θnb > θma for all n ≠ m.

Lemma 4.63. The first term of θnb is given by

θ1b = ∗ + δ1 + ↑3/2 = ∗ + ↑ + ↓2 .

Theorem 4.64. The Uptimal Form is given by

θnb = ∗ + 2 ⋅ ↑ + ↓[n+1] .

Hence aw(θnb ) = 1 for n ≥ 1.

4.5 Table of values for 2 ≤ n ≤ 6 and 2 ≤ m ≤ 12 and two examples
The values of [n × m] (from (12.1)) for 2 ≤ n ≤ 6 and 2 ≤ m ≤ 12 are given in Tables 12.1 and 12.2. These tables include translations of CGSuite values into the terms of the sequences previously discussed. The translations were all done by hand; no computer code exists for providing them.

Table 12.1: Values of [n × m] for n = 2, 3 and 2 ≤ m ≤ 12.

G Value aw(G) Class

2×2 ∗ 0 𝒩
2×3 ↑∗ 1 𝒩
2×4 ↓2 0 ℛ
2×5 ↑[2] ∗ 1 𝒩
2×6 ↑[3] ∗ 1 𝒩
2×7 ↑[4] ∗ 1 𝒩
2×8 ↑[5] ∗ 1 𝒩
2×9 ↑[6] ∗ 1 𝒩
2 × 10 ↑[7] ∗ 1 𝒩
2 × 11 ↑[8] ∗ 1 𝒩
2 × 12 ↑[9] ∗ 1 𝒩
3×3 ∗ 0 𝒩
3×4 ↓ −1 ℛ
3×5 −α1 −1 ℛ
3×6 −α2 −2 ℛ
3×7 ∗ 0 𝒩
3×8 ↓ −1 ℛ
3×9 −α1 −1 ℛ
3 × 10 −α2 −2 ℛ
3 × 11 ∗ 0 𝒩
3 × 12 ↓ −1 ℛ

Table 12.2: Values of [n × m] for n = 4, 5, 6 and 2 ≤ m ≤ 12.

G Value aw(G) Class

4×4 ∗ 0 𝒩
4×5 ε1a 0 𝒩
4×6 {∗, ↑[2] | −α2 } −1/2 𝒩
4×7 {⇑ | ∗} 1 ℒ
4×8 {0, ↑∗ | ↓} 0 𝒩
4×9 {β2 | −α1 } 0 𝒩
4 × 10 {⇑ | ↑ ∗ || ↑2 ∗ ||| −α2 } −1/2 𝒩
4 × 11 {3 ⋅ ↑ | ↑ ∗ || ∗} 1 ℒ
4 × 12 {0, {⇑∗ | 0} | ↓} 0 𝒩
5×5 ∗ 0 𝒩
5×6 ↓ −1 ℛ
5×7 {γ1b ∗ | ∗} 1 ℒ
5×8 {0, {γ1a ∗ | 0, α1 ∗} | ↓} 0 ℛ
5×9 {{0, α1 ∗ | −α1 } | 0} 0 ℛ
5 × 10 {∗, {α1 ∗ | 0} | −α2 } −1 ℛ
5 × 11 {γ1b | α1 ∗ || ∗} 1 ℒ
5 × 12 {0, {γ1b | α1 , γ1a || α1 ∗ | 0} | ↓} 0 𝒩
6×6 ∗ 0 𝒩
6×7 {γ3a ∗ | ↑[4] } 5/2 ℒ
6×8 see below 1 ℒ
6×9 {α3 ∗ | ε6a } 3/2 ℒ
6 × 10 {α2 ∗ | −δ6 } 1/2 𝒩
6 × 11 see below 11/4 ℒ
6 × 12 see below 3/2 ℒ

Along with each value, the atomic weight and outcome class are stated. Recall that every
combinatorial game G is found in exactly one of four outcome classes:

G∈𝒩 if First can win; G ∈ ℒ if Left can win;


G∈𝒫 if Second can win; G ∈ ℛ if Right can win.

The following entries are a part of Table 12.2:

[6 × 8] = {GL | ε3b }, where GL = {γ3a ∗ | γ1a ∗, {α2 ∗ | 0, α1 ∗}},


[6 × 11] = {γ4b , H | ↑[8] }, where H = {α3 ∗ | α2 ∗ || α2 ∗ ||| α2 ∗},
[6 × 12] = {GL | ε7b }, where GL = {γ4b | γ2b , γ1c ∗ ||| {0 | α1 ∗ || α1 ∗ ||| α1 ∗} | α1 ∗ || α1 ∗}.

Example 4.65. Figure 12.2 shows the board position following 10 moves from a 13 × 16
starting position. Five components remain, and their corresponding atomic weights
are given. Left moved first, so, again, it's Left's turn.

Figure 12.2: The first 10 moves (indicated by shading).

The components shown in Figure 12.2 are

A = ↑[5] ∗, B = ↓, C = {−ε6a | −α3 ∗}, D = α1 , and E = α1 ;

so the present position is the sum S = A + B + ⋅ ⋅ ⋅ + E.


Since aw(S) = aw(A) + aw(B) + ⋅ ⋅ ⋅ + aw(E) = 1/2, it follows that Left has a winning
move; see [2, p. 236]. In this case, she can win by choosing the canonical left option of
C, namely, [9 × 3] + [9 × 2] = −ε6a , as depicted in Figure 12.3.

Figure 12.3: The Canonical Left Option of C = [9 × 6] : [9 × 6]L = [9 × 3] + [9 × 2] = α1 + ↓[6] ∗ = −ε6a .

The standard output for [9 × 6]L is given by

{{0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, ∗}}}}}}}.

By comparison with the previous expression, the information provided by its trans-
lation, [9 × 6]L = α1 + ↓[6] ∗, is immediately available. Indeed, such translations are
generally more tractable. For large positions, however, they are not readily obtained
by hand.

The option [9 × 6]L specifies an overall position SL given by the sum of the follow-
ing:

A = ∗ + ↑[5] ,
B = ↓,
CL = ↑ + ↓[6] − ↑3/2 ,
D = ↑ − ↑3/2 ,
E = ↑ − ↑3/2 .
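The coefficientwise sum for SL can be checked mechanically (a sketch of ours with an ad hoc encoding; up_bracket expands ↑[k] into its ↑i digits via Lemma 3.6):

def up_bracket(k):
    """↑[k] = ↑ + ↑2 + ... + ↑k (Lemma 3.6), as {i: coefficient of ↑i}."""
    return {i: 1 for i in range(1, k + 1)}

def add_coeffs(a, b, c=1):
    """Return a + c*b for coefficient dictionaries."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + c * v
    return out

# Components as (number of ∗'s, up-i coefficients, coefficient of ↑3/2):
A  = (1, up_bracket(5), 0)                            # ∗ + ↑[5]
B  = (0, {1: -1}, 0)                                  # ↓
CL = (0, add_coeffs({1: 1}, up_bracket(6), -1), -1)   # ↑ + ↓[6] − ↑3/2
D  = (0, {1: 1}, -1)                                  # ↑ − ↑3/2
E  = (0, {1: 1}, -1)                                  # ↑ − ↑3/2

parts = [A, B, CL, D, E]
star = sum(p[0] for p in parts) % 2
ups = {}
for p in parts:
    ups = add_coeffs(ups, p[1])
c32 = sum(p[2] for p in parts)

print(star, {k: v for k, v in ups.items() if v}, c32)
# 1 {1: 2, 6: -1} -3, i.e., SL = ∗ + 2·↑ + ↓6 − 3·↑3/2; atomic weight d1 = 2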

We obtain SL = ∗ + 2 ⋅ ↑ + ↓6 − 3 ⋅ ↑3/2 , a position of atomic weight 2. Hence SL is a winning position for Left.

Example 4.66. This example shows a portion of Table 12.2 for n = 5. Here the term
φ = {α1 | −α1 ∗} is used to achieve further simplification. Regarding φ, we find that
aw(φ) = 0, φ > 0, φ || ∗, and φ < ↑.

Table 12.3: Certain Entries from Table 12.2 for n = 5.

5×7 {γ1b ∗ | ∗} 1 ℒ
5×8 {0, {γ1a ∗ | 0, α1 ∗} | ↓} 0 ℛ
5×9 {{0, α1 ∗ | −α1 } | 0} 0 ℛ
5 × 10 {∗, {α1 ∗ | 0} | −α2 } −1 ℛ

Upon translation of the entries in Table 12.3, we obtain certain “extensions of extended
uptimals,” namely, [5 × 7] = (↑ − ↑3/2 ) + φ, [5 × 8] = (∗ − ↑3/2 ) + φ, [5 × 9] = ∗ + φ, and
[5 × 10] = ↑ + φ.

5 Values of larger positions


Values of [n × m] follow for 7 ≤ n ≤ 12 and 2 ≤ m ≤ 12. All of these values have
been translated (likewise, by hand) into the sequential terms that were featured in
Section 4. However, these particular expressions become exceedingly complex and
are not included here. Nevertheless, they are posted in [5].
The square positions introduced a peculiarity that we have been unable to
explain. Note that the values of [n × n], 2 ≤ n ≤ 12, are all equal to ∗ = {0 | 0} with one
exception. Specifically, in standard terms, we find that

[10 × 10] = ±({{0 ||| 0 || 0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗} |||| ⇑∗, {0 | {0, ∗ | 0, ↑∗},
{0 || 0, ∗ | 0, ↑∗}}} ||| {0 ||| 0 || 0, ∗ | 0, ↑∗ |||| {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗}} | 0,
{0 ||| 0, ∗ || ∗, ↓ | 0 |||| {∗, ↓ | 0}, {0, ∗ || ∗, ↓ | 0}} || {0 | ∗, ↑ || 0, {0 | ∗, ↑ || 0, ∗}
|||| 0, ↓ ∗ | 0, ∗ ||| 0, ↓ ∗ | 0, ∗ || 0}}, {{0 ||| 0 || 0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗}
|||| {0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗}}, {0 ||| 0 || 0 | {0, ∗ | 0, ↑∗},
{0 || 0, ∗ | 0, ↑∗} |||| {0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗}},
{0 ||| 0 || 0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗} |||| {0 | {0, ∗ | 0, ↑∗},
{0 || 0, ∗ | 0, ↑∗}}, {0 ||| 0 || 0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗}
|||| {0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗}}, {0 ||| 0 || 0 | {0, ∗ | 0, ↑∗},
{0 || 0, ∗ | 0, ↑∗} |||| {0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗}},
{0 ||| 0 || 0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗} |||| {0 | {0, ∗ | 0, ↑∗},
{0 || 0, ∗ | 0, ↑∗}}, {0 || 0 | {0, ∗ | 0, ↑∗}, {0 || 0, ∗ | 0, ↑∗}}}}}}}} | ∗})

which translates reasonably well as

[10 × 10] = ±{H, {−γ6c ∗ | ∗}}, in which H = {J ||| K | 0, L || M},

where J = {−γ3a ∗ | ⇑∗, −γ1a ∗}, K = {α3 ∗ | α1 ∗, α2 ∗}, L = {0 | ε1a || −α1 , ε1a },
and M = {−ε2a || −α1 ∗ | −α2 ∗}.

It does happen that [10 × 10] has “star-like” characteristics; namely, it is confused
with 0, ∗, and ↑ and is less than 2 ⋅ ↑. We mention in passing that this position is
conspicuous in at least one other way: [10 × 10] has four winning moves (so two
of them are necessarily dominated options). Only the fifth column (row) happens to be
a losing move for Left (Right). The winning moves for Left are illustrated in Figure 12.4.

Figure 12.4: Left’s four winning moves from [10 × 10] (depicted by grey bars).

On the other hand, all other [n × n] positions for 2 ≤ n ≤ 12 admit exactly one winning
move.
For larger square positions [n × n], n > 12, our purpose was simply to evaluate the
square positions themselves. With the [10 × 10] case in mind, we were looking for further
instances in which such positions were unequal to star. No further attempt was made to analyze

the followers of these positions. (The [18 × 18] case alone would require 1,832 pages of
MSWord to print out!)
Similarly to the [10 × 10] case, the values of [14 × 14] and [18 × 18] are also unequal
to star; moreover, they have the same star-like characteristics. Is there a pattern here?
It is yet to be determined whether or not [22 × 22], [26 × 26], . . . are also unequal to star.

Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, 2nd edition. CRC Press, Boca Raton, FL, 2019.
[2] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways, 2nd edition. A. K. Peters, Ltd.,
Wellesley, MA, 2001.
[3] J. H. Conway, On Numbers and Games, 2nd edition. A. K. Peters, Ltd., Natick, MA, 2001.
[4] L. R. Haff and W. J. Garner, An Introduction to Combinatorial Game Theory. Lulu Press, 2018.
[5] https://www.math.ucsd.edu/~haff/.
[6] N. A. McKay, Canonical forms of uptimals, Theoret. Comput. Sci. 412 (2011), 7122–7132.
[7] A. N. Siegel, CGSuite. Combinatorial games suite computer program, 2004.
[8] A. N. Siegel, Combinatorial Game Theory, Graduate Studies in Mathematics, Vol. 146. American
Mathematical Society, Providence, RI, 2013.
Melissa A. Huggan and Craig Tennenhouse
Genetically modified games
Abstract: Genetic programming is the practice of evolving formulas using crossover
and mutation of genes representing functional operations. Motivated by genetic evo-
lution, we introduce and solve two combinatorial games, and we demonstrate some
advantages and pitfalls of using genetic programming to investigate Grundy values.
We conclude by investigating a combinatorial game whose ruleset and starting posi-
tions are inspired by genetic structures.

1 Introduction
The fundamental unit of biological evolution is a gene, which represents a small piece
of information, and the genome is a collection of genes that encodes an organism’s
complete genetic information. Within the context of biological evolution, the genes
of the fittest organisms survive and are passed on to the next generation, their
chromosomes changing over time through competition to better fit their environment.
This modification occurs through the processes of mutation and crossover,
wherein individual genes are altered and pairs of chromosomes trade information,
respectively, as organisms pass down their genetic information to their progeny (see
Figure 13.1).

Figure 13.1: A pair of chromosomes undergoing crossover and then mutation. In general, as exempli-
fied by a grey gene, this process is not restricted to two values.

This set of mechanisms in biological evolution has been co-opted as a model for al-
gorithmic development of heuristic solutions to a variety of problems, like antenna

Acknowledgement: Melissa A. Huggan was supported by the Natural Sciences and Engineering Re-
search Council of Canada (funding reference number PDF-532564-2019).

Melissa A. Huggan, Department of Mathematics, Ryerson University, Toronto, Ontario, Canada,


e-mail: melissa.huggan@ryerson.ca
Craig Tennenhouse, School of Mathematical and Physical Sciences, University of New England,
Biddeford, Maine, USA, e-mail: ctennenhouse@une.edu

https://doi.org/10.1515/9783110755411-013

design [14], the traveling salesman problem [7], and graph coloring [10]. In these prob-
lems a chromosome encodes information about the structure and properties of work-
ing solutions. These solutions are the results of genetic algorithms. When the chromo-
some instead represents a function or program, the process is called genetic program-
ming. Genetic programming is often used when a user has a collection of data points
and is looking for a function to fit them. The fitness of a particular program is there-
fore related to the error between the data points and the program's output. This mechanism
is similar to that of regression in statistical methods (Figure 13.2). Given a set of data
points, both tools are used to minimize the error between these data points and the as-
sociated function values. Statistical regression requires a predefined model in which
the coefficients are optimized. In genetic programming the function itself is iteratively
adjusted, via specified operations called crossover and mutation, using a pool of el-
ementary functions and constants. No predefined model is necessary. If the resulting
function better fits the data, then the change is likely to be accepted, and the process
continues. The process stops evolving based on a predetermined number of iterations
or when a particular fitness threshold is reached.

Figure 13.2: A curve with poor fit to data points (left) and another with much better fit (right).

There are a number of different structures used in genetic programming to represent a
chromosome, the simplest being linear structures and tree structures. A linearly or-
ganized chromosome can be visualized much like a biological chromosome (see Fig-
ure 13.1). One example of such a structure is in [16], which introduces multiexpression
programming. Chromosomes represented by trees have some advantages over the lin-
ear approach, and we will discuss them further in Section 1.1. As the structures rep-
resenting a chromosome undergo crossover and mutation, the functions associated
with them exhibit smaller errors and better fit the goal data.
Genetic algorithms have been applied to combinatorial games (see [6, 13, 18]).
However, these efforts have been focused on using genetic programming to develop
strategies rather than finding a formula for the Grundy values. We are interested in
examining whether genetic programming could be a useful tool for determining val-
ues of combinatorial game positions. The only model for this type of project in the
literature is in [16], which uses the multiexpression programming model with a lin-
ear chromosome. This method does not precisely fit our needs, as the author of that

paper focuses on the outcome class classification problem instead of Grundy values
and restricts their investigation to nim. However, the project and its success serve as a
strong motivator for the application of genetic programming to combinatorial games,
and we hope we have done it justice in extending their results and adding to the body
of work combining these two mathematical endeavors. An automated conjecturing
tool was examined in [6] within the context of the game chomp, also focused on out-
come classes.
Recall that the Grundy value of an impartial game position is the smallest non-
negative integer not included in the Grundy values of its options [19, 11]. For more
information about combinatorial game theory, see [1, 2, 3, 4, 5, 8]. In this project, we
generate data points of the form (H, g), where H ∈ ℤn is a list of integers representing
a game position, and g ∈ ℤ is the associated Grundy value. A key challenge of this
project is the fact that heuristics are not often useful for calculating Grundy values.
In fact, either a function completely determines the value of a game, or it is incorrect.
This leaves us with the difficult task of devising a fitness function that represents dis-
tance (not a natural concept in the space of impartial game values) and at the same
time leads to eventual convergence with an error of zero.
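For concreteness, ground-truth data points of this form can be generated by a direct memoized mex recursion. The following minimal Python sketch is ours, not code from the project, and the toy ruleset in options is only a placeholder:

def mex(values):
    # Smallest nonnegative integer not occurring in `values`.
    m = 0
    while m in values:
        m += 1
    return m

def grundy(pos, options, cache):
    # Memoized Sprague-Grundy recursion over an arbitrary impartial ruleset.
    if pos not in cache:
        cache[pos] = mex({grundy(o, options, cache) for o in options(pos)})
    return cache[pos]

def options(n):
    # Placeholder ruleset: remove one or two stones from a single heap.
    return [n - k for k in (1, 2) if n - k >= 0]

cache = {}
data = [(n, grundy(n, options, cache)) for n in range(12)]
# For this toy ruleset, the Grundy value of a heap of n stones is n mod 3.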
The paper proceeds as follows. We first give necessary background for genetic pro-
gramming, framed within the context of the Python package gpLearn and the method-
ology for the conducted experiments. We then introduce, in sequence, three impartial
rulesets motivated by genetic structures. The first two games use simplified structures
to allow for computational analysis. In Section 2, we utilize gpLearn as a tool to help
conjecture a pattern within the sequence of game values. This then points us in the
direction of the known game kayles, to which we prove its equivalence. Next, in Sec-
tion 3, we introduce a two-point crossover game and again implement our computa-
tional methods to obtain conjectured formulas for Grundy values based on the num-
ber of heaps. Although this alone does not prove the generalized formula, the program
provides a direction for the structure of a general formula, which we prove in Theo-
rem 3.2. Lastly, we examine a game whose ruleset is inspired by more typical genetic
changes and includes both crossover and mutation moves. In Theorem 4.3, we prove
its equivalence to a subset of arc kayles positions, and in Theorem 4.5, we determine their
game values. We conclude with future directions.

1.1 Genetic programming: methods


Since we are interested in data points that are computationally inexpensive to deter-
mine, we have chosen to use the Python package gpLearn [20]. This package uses the
tree model of chromosome representation, introduced in the introduction to Section 1,
wherein each leaf is associated with a primitive (a constant or a single input parame-
ter), and each internal node is associated with a function on its child node(s). The root

node is therefore recursively associated with a single function on the set of primitives
(see Figure 13.3).

Figure 13.3: A chromosome in tree format representing the function f (x, y) = ln(2 + x)/(x(0 − y)).

Mutation is represented by pseudorandomly replacing a node with a different function


or primitive (or one of another set of mutation-like actions) as appropriate. Crossover
between chromosomes is enacted by swapping subtrees.
For the games in Sections 2 and 3, we examined a number of different sets of hy-
perparameters, and for both convergence and computation time, the most reasonable
were heap sizes up to 10 on positions with anywhere from one to five heaps.
The package gpLearn is intended for fitting real-valued functions of several real
variables to data points using standard elementary functions and binary operations
over the reals. We modified the default set of functions to instead focus on discrete
functions of several discrete variables. We wrote and included the following binary
and unary operations, which operate bitwise on integer inputs: XOR, AND, OR, and NOT.
We also included MOD, LOG2 , and PLUS1, whose operations are self-explanatory. Finally,
we introduced logical operators EQUAL, LESS, and GREATER to return 0 for False and 1
for True. Default functions included SUB for subtraction, ADD, TIMES, and DIVIDE. Our
fitness function computed the total absolute difference between each genetic program's
output and the computed Grundy values, so that a lower fitness value represents a better fit,
although we experimented with measuring distance using the nim-sum.
Populations ranged from 1,000 to 10,000 individual programs, and we restricted
most runs to 20 generations. Elites, relatively highly fit programs in each generation,
were retained unmodified between generations. We also experimented with rates of
mutation, settling on higher values to prevent getting stuck in local minima.
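The following sketch (ours, not the project's actual code) shows how such custom primitives and a total-absolute-error metric can be registered through gpLearn's public API; the function set and hyperparameter values below are illustrative assumptions:

import numpy as np
from gplearn.genetic import SymbolicRegressor
from gplearn.functions import make_function
from gplearn.fitness import make_fitness

def _xor(a, b):
    # Bitwise XOR on gplearn's float arrays, cast back to float.
    return np.bitwise_xor(a.astype(int), b.astype(int)).astype(float)

XOR = make_function(function=_xor, name='XOR', arity=2)

def _total_abs_error(y, y_pred, sample_weight):
    # Lower is better: mean absolute difference from the true Grundy values.
    return np.average(np.abs(y - np.round(y_pred)), weights=sample_weight)

fitness = make_fitness(function=_total_abs_error, greater_is_better=False)

model = SymbolicRegressor(population_size=5000, generations=20,
                          function_set=[XOR, 'add', 'sub', 'mul'],
                          metric=fitness, p_crossover=0.6,
                          p_subtree_mutation=0.15, p_hoist_mutation=0.05,
                          p_point_mutation=0.2)
# model.fit(X, y)  # X: rows of heap sizes, y: precomputed Grundy values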

2 A single-point crossover and mutation game


There are two primary methods of crossover used in genetic algorithms, one-point and
two-point. For the former, consider a pair of bit strings of length n, B1 = (a1 , . . . , an ) and
B2 = (b1 , . . . , bn ). An integer k ∈ [1, n) is chosen pseudorandomly, and the substrings

(a1 , . . . , ak ) and (b1 , . . . , bk ) are swapped, leading to the new bit strings

B′1 = (b1 , . . . , bk , ak+1 , . . . , an ), B′2 = (a1 , . . . , ak , bk+1 , . . . , bn ).

After crossover, there is a possible mutation, depending on the chosen mutation rate,
turning, say, B′1 into B′′1 = (b1 , . . . , bi−1 , 1 − bi , bi+1 , . . . , bk , ak+1 , . . . , an ).
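In code, both operations take only a few lines (a sketch of ours; note the 0-based indexing, whereas the text indexes bits from 1):

def one_point_crossover(b1, b2, k):
    # Swap the length-k prefixes of two bit strings (1 <= k < len(b1)).
    return b2[:k] + b1[k:], b1[:k] + b2[k:]

def mutate(bits, i):
    # Flip the single bit at index i.
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

b1, b2 = (0, 0, 1, 1, 0), (1, 0, 1, 0, 1)
c1, c2 = one_point_crossover(b1, b2, 2)  # (1, 0, 1, 1, 0), (0, 0, 1, 0, 1)
c1 = mutate(c1, 3)                       # (1, 0, 1, 0, 0)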
Motivated by these processes, we define a new impartial combinatorial game ga1.
To simplify both rules and analysis, we define a position as a single bit string. A muta-
tion move flips a single bit in the string, and since there is no real crossover in a single
string, we consider the flip of a sequence of bits to be representative of this operation.

Ruleset 2.1 (ga1). A position in ga1 is a bit string of length n. There are two move op-
tions. Crossover consists of choosing an integer k, 1 ≤ k ≤ n − 1, wherein all bits from
position 1 to k are flipped. A mutation move is simply the flip of any single bit in the
string. A move is legal only if the total number of substrings of the form 01 and 10
increases.

This latter restriction, that “disorder” increases, serves two purposes. Firstly, it
ensures that the game ends in a finite number of moves. Secondly, it represents the
tendency of chromosomes to combine in ever more complex ways over time. We define
the condition of increasing substrings 01 and 10 formally as follows.

Definition 2.2. The entropy of a bit string is the number of substrings of the form
01 and 10.

The game ga1 is equivalent to a heap game in the following way. If we consider a
run in a bit string to be a maximal substring consisting of all 0s or all 1s, then any bit
string can be converted into a list of integers representing run sizes. For example, the
string 001000111 becomes (2, 1, 3, 3). Although this representation loses information
about which bits are associated with each integer, the symmetry of the ruleset makes
this lost information unnecessary to the game analysis. We can simplify the ruleset
further by Proposition 2.3.
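This bit-string-to-heap conversion is exactly run-length encoding; a one-line helper (ours) suffices:

from itertools import groupby

def runs(bits):
    # Maximal run lengths, e.g. runs("001000111") == [2, 1, 3, 3].
    return [len(list(g)) for _, g in groupby(bits)]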

Proposition 2.3. If H = (h1 , . . . , hk ) is a list of heaps representing a bit string in ga1, then
the following hold.
(1) Any heap equal to 1 can be removed.
(2) The order of the heaps does not affect the Grundy value.
(3) Each move is equivalent to one of the following.
(a) Split any heap hi > 3 into two heaps of size at least 2 each.
(b) Remove 1 from any heap hi ≥ 3, or remove 2 from any heap hi ≥ 4.
(c) Remove 1 from any heap hi ≥ 5 and split the remainder into two heaps of at
least 2 each.
(d) Remove any heap of size 2 or 3.

Proof. We will prove each part of Proposition 2.3 separately.


1. A single bit between two runs of the opposite value or at the end of a string is
represented by a heap of size 1. No move that increases entropy has an effect on
this heap, and thus its removal does not affect the game play or the Grundy value
of the position. Thus it can be removed.
2. Say that heaps hi and hj switch positions in H resulting in H ′ . Any mutation or
crossover point k chosen within hi in H is equivalent to the index k ′ in H ′ , where
k ′ = k − (hj + hj+1 + ⋅ ⋅ ⋅ + hi−1 ) if hj precedes hi in H, and k ′ = k + (hi+1 + ⋅ ⋅ ⋅ + hj−1 + hj )
otherwise. This results in identical game play. Therefore the order of the heaps in
H does not affect the game value.
3. For the move equivalences, note that to make a legal crossover move in ga1,
a player must choose the crossover point in the midst of a run. This effectively
splits the run into two and leaves the others alone (other than switching the bits
in the affected substring). This is equivalent to splitting a heap into two, and if
one or both of the resulting heaps have size 1, then they can be removed from play.
Similarly, a legal mutation move must also occur in the midst of a run, splitting
a heap into either two or three with at least one heap of size 1. Again, these size 1
heaps can be removed.

As a direct result of Proposition 2.3, we need only consider single heap positions,
since the Grundy value of a list of heaps is equal to the nim-sum of the Grundy values
of the individual heaps.
The package gpLearn was employed as described in Section 1.1. In particular,
game values were computed using positions of the form (h1 , h2 , . . . , hn ) with varying
values for the number of heaps n and each heap hi . These solutions were set as ground
truth for the genetic programming implementation. The system generated random
chromosomes, each associated with a function from ℕn to ℕ, and used crossover and
mutation to progressively reduce the error between the values of these functions and
the ground truth values. Although no exact formula was found, after 14 generations,
a local minimum on the total absolute error was reached. Modifying hyperparameters
and running for another 7 generations led to the formula

MOD(1+h,MOD(h+1,3) + 1) - MOD(h-1,4) + MOD(1+h,3) + 4,

where h is the size of a single heap and MOD(x,n) represents x (mod n). While not a
particularly accurate formula, we do see the presence of both modulo 3 and modulo 4.
This leads us to examine the exact computed Grundy values closely for periodicity of
order twelve and find a striking similarity with the values of the combinatorial game
kayles.

Ruleset 2.4 (kayles [9]). In kayles a player may remove one or two stones from any
heap, and if any stones remain, then these may be split into two heaps.

kayles has octal code 0.77 [8] and has been well studied. In particular, it is known
that the sequence of Grundy values for a single heap of size n in kayles is periodic with
period 12 after n = 71 [12].

Theorem 2.5. The Grundy value of a single heap game of size n in ga1 is equal to the
value of a heap of size (n − 1) in kayles.

Proof. This is easy to compute for n ≤ 3. If n ≥ 4, then the options are {(j, n − j) : 2 ≤ j ≤
(n−j)}∪{(k, n−k−1) : 2 ≤ k ≤ n−k−1}∪{(n−1), (n−2)}. The options for an (n−1)-sized heap
in kayles are {(j, n−j−2) : 1 ≤ j ≤ (n−j−2)}∪{(k, n−k−3) : 1 ≤ k ≤ n−k−3}∪{(n−2), (n−3)}.
We can therefore consider a move in ga1 to be equivalent to the following process:
1. Remove a stone from a heap,
2. Make a kayles move in the resulting heap of size (n − 1), and
3. Add a stone back to all resulting heaps.

Therefore the game ga1 reduces to a game of kayles, and thus the Grundy values are
computable in the same manner as those for kayles.
Although our genetic programming method did not determine a precise error-free
formula for the Grundy values of ga1, it did inform our understanding of the game by
pointing us to its periodic nature.
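Theorem 2.5 is also easy to check computationally. The following sketch (ours) implements the single-heap options enumerated in the proof and reproduces the kayles sequence shifted by one heap:

from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def sg_ga1(n):
    # Single-heap options from the proof of Theorem 2.5:
    # (n-1), (n-2), splits (j, n-j) with j >= 2, splits (k, n-1-k) with k >= 2.
    if n <= 1:
        return 0
    opts = {sg_ga1(n - 1), sg_ga1(n - 2)}
    opts |= {sg_ga1(j) ^ sg_ga1(n - j) for j in range(2, n - 1) if j <= n - j}
    opts |= {sg_ga1(k) ^ sg_ga1(n - 1 - k) for k in range(2, n - 1) if k <= n - 1 - k}
    return mex(opts)

print([sg_ga1(n) for n in range(2, 10)])
# [1, 2, 3, 1, 4, 3, 2, 1]: the kayles values of heaps of sizes 1 through 8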

3 A two-point crossover game


Next, we consider a similar impartial game based on genetic crossover, this time using
two positions instead of one. Consider a pair of bit strings

B1 = (a1 , . . . , an ) and B2 = (b1 , . . . , bn ).

If 1 ≤ x < y ≤ n are integers, then the two-point crossover using positions x and y
results in the bit strings

B′1 = (a1 , . . . , ax−1 , bx , . . . , by−1 , ay , . . . , an )

and

B′2 = (b1 , . . . , bx−1 , ax , . . . , ay−1 , by , . . . , bn ),

that is, a substring with matching indices from each bit string is swapped. We wish
to define an impartial game motivated by two-point crossover as a move mechanic.
As we did with ga1, we play only in a single bit string. This also means that defin-
ing mutation-type moves is redundant since any such move would be equivalent to
crossover with x = y − 1.

Ruleset 3.1 (ga2). A position in ga2 is a bit string of length n. On their turn a player
chooses two integers x, y ∈ [1, n], x < y, wherein all bits from position x to (y − 1) are
flipped. A move is legal only if the total number of substrings of the form 01 and 10
increases.

As with ga1, we can reduce ga2 to a game on heaps. Note that, again, a run of bits
can be represented by an integer. A legal move requires that at least one of {x, y} is
chosen within a run. The possible options are as follows:
1. Both x and y are within the bounds of a single run, equivalent to splitting a single
heap into three heaps.
2. x and y are each within the bounds of different runs, equivalent to splitting any
two heaps into two.
3. x and y are chosen so that exactly one single heap is split into two.

Just as with Proposition 2.3, we see that heaps of size 1 are negligible, as is the order
of the heaps. However, since players can alter multiple heaps in a single move, we
cannot compute the Grundy value by simply computing the nim-sum of the Grundy
values of single heap games.
As in Section 2, we applied gpLearn with the modified function list to computa-
tionally determined Grundy values, without first examining these values. Once the
number of nonzero heaps was included as a primitive value in games with more than
two heaps (e. g., (3, h1 , h2 , h3 ) is a three-heap game, whereas (4, h1 , h2 , h3 , h4 ) represents
a four-heap position), genetic programming proved much more successful, yielding
the formulas below with 100 % accuracy:
1. For a single-heap game with heap size h,
MOD(SUB(h,1),PLUS1(PLUS1(1))),
which is equivalent to (h − 1) (mod 3).
2. With two heaps h1 , h2 ,
MOD(PLUS1(SUB(ADD(h1 , h2 ), XOR(h1 , h1 ))), PLUS1(PLUS1(EQUAL(h1 , h1 )))),
which is equivalent to (h1 + h2 + 1) (mod 3).
3. For a three-heap game with inputs 3, h1 , h2 , h3 , we found
MOD(ADD(ADD(h3 , h1 ), h2 ), ADD(3, SUB(0, 0))),
which reduces to (h1 + h2 + h3 ) (mod 3).
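These simplifications can be checked mechanically. Defining the primitives as Python functions (a small sketch of ours) confirms each of the three identities:

def MOD(x, n): return x % n
def SUB(a, b): return a - b
def ADD(a, b): return a + b
def XOR(a, b): return a ^ b
def EQUAL(a, b): return int(a == b)
def PLUS1(x): return x + 1

# 1. Single heap: equals (h - 1) mod 3.
assert all(MOD(SUB(h, 1), PLUS1(PLUS1(1))) == (h - 1) % 3
           for h in range(1, 100))
# 2. Two heaps: equals (h1 + h2 + 1) mod 3, since XOR(h1, h1) = 0.
assert all(MOD(PLUS1(SUB(ADD(a, b), XOR(a, a))), PLUS1(PLUS1(EQUAL(a, a))))
           == (a + b + 1) % 3 for a in range(1, 20) for b in range(1, 20))
# 3. Three heaps: equals (h1 + h2 + h3) mod 3.
assert all(MOD(ADD(ADD(c, a), b), ADD(3, SUB(0, 0))) == (a + b + c) % 3
           for a in range(1, 10) for b in range(1, 10) for c in range(1, 10))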

Although these results themselves do not provide a generalized formula, they do gen-
eralize easily to the following:

Theorem 3.2. Let H = (h1 , . . . , hn ) be an n-heap position in ga2, and let t be the smallest
nonnegative integer such that (n + t) ≡ 0 (mod 3). Then the Grundy value of H is (t +
∑ni=1 hi ) (mod 3).

Proof. Note first that although we can eliminate heaps of size 1 in our analysis of ga2
just as we did in Proposition 2.3 for ga1, we are not compelled to do so. In fact, not
removing them makes for a simpler analysis here.
In the case of a single stone, it is clear that the Grundy value is 0 as no moves are
possible. It is also easy to see that the claim holds when all heaps have size 1 except
possibly a single heap of size 2, so we need only consider the remaining cases. We
proceed now by minimum counterexample. Assuming that the claim is false, let m be
the smallest integer such that not all games on m-many stones follow the statement
of the theorem. Among all such games with m stones, let H = (h1 , . . . , hj ) be a position
with the greatest number of heaps.
For a positive integer x, let {x1 , x2 } and {x1 , x2 , x3 } represent all sets of positive inte-
gers satisfying x1 + x2 = x and x1 + x2 + x3 = x. For any i, k with 1 ≤ i < k ≤ j, the options
of H are H \ {hi } ∪ {hi1 , hi2 }, H \ {hi } ∪ {hi1 , hi2 , hi3 }, and H \ {hi , hk } ∪ {hi1 , hi2 , hk1 , hk2 }, i. e.,
all positions in which any one heap hi of sufficient size is removed and replaced with
two or three heaps whose sum is hi , and those in which any two heaps hi , hk ≥ 2, are
removed and each replaced with two heaps whose sums are hi , hk , respectively.
All options contain m total stones since we have not removed any. Further, every
option has more heaps than H has, and therefore by the choice of minimal counterex-
ample, adheres to the statement of the claim. Thus each option's Grundy value equals
either (m + t − 1) (mod 3) or (m + t − 2) (mod 3). If there is a heap of size at least three,
then it can be split into two heaps via a crossover move or three heaps via mutation
(changing only one bit). Similarly, if there are two heaps of size at least two, then these
can become four heaps through crossover or three heaps through mutation. Therefore
H has options with one more and two more heaps, and hence both values (m + t − 1)
(mod 3) and (m + t − 2) (mod 3), respectively, are present among the options. Thus
H must have the Grundy value (m + t) (mod 3), contradicting the choice of H as a
minimal counterexample.

Once again, as with ga1, our genetic programming implementation informed


a conjecture about the Grundy values of ga2, which we were then able to prove. In
this case the formula provided by the system was exact.
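Theorem 3.2 can likewise be confirmed by brute force on small positions. The following sketch (ours) implements the option set from the proof (split one heap into two or three parts, or split two heaps into two parts each) and checks the closed formula:

from functools import lru_cache
from itertools import combinations, combinations_with_replacement

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

def splits2(h):
    # Unordered ways to write h as a sum of two positive parts.
    return [(a, h - a) for a in range(1, h // 2 + 1)]

def splits3(h):
    # Unordered ways to write h as a sum of three positive parts.
    return [(a, b, h - a - b) for a in range(1, h)
            for b in range(a, h) if h - a - b >= b]

@lru_cache(maxsize=None)
def sg_ga2(pos):
    opts = set()
    heaps = list(pos)
    for i, h in enumerate(heaps):
        rest = heaps[:i] + heaps[i + 1:]
        for parts in splits2(h) + splits3(h):
            opts.add(sg_ga2(tuple(sorted(rest + list(parts)))))
    for (i, hi), (j, hj) in combinations(enumerate(heaps), 2):
        rest = [h for k, h in enumerate(heaps) if k not in (i, j)]
        for p1 in splits2(hi):
            for p2 in splits2(hj):
                opts.add(sg_ga2(tuple(sorted(rest + list(p1) + list(p2)))))
    return mex(opts)

def predicted(pos):
    t = (-len(pos)) % 3  # smallest t with len(pos) + t = 0 (mod 3)
    return (t + sum(pos)) % 3

positions = [p for r in (1, 2, 3)
             for p in combinations_with_replacement(range(1, 6), r)]
assert all(sg_ga2(p) == predicted(p) for p in positions)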

4 The crossover-mutation game


To represent both crossover and mutation more accurately, we now consider a game
played on a pair of bit strings.

Ruleset 4.1 (crossover-mutation (cm)). A position in crossover-mutation is a pair


of bit strings of length n, B1 = (a1 , . . . , an ) and B2 = (b1 , . . . , bn ). There are two move
options. Crossover consists of choosing an integer k, 1 ≤ k ≤ n − 1, wherein all bits 1
to k from B1 are swapped with the bits 1 to k of B2 , in particular, leading to the new bit
strings

B′1 = (b1 , . . . , bk , ak+1 , . . . , an ), B′2 = (a1 , . . . , ak , bk+1 , . . . , bn ).

Mutation involves choosing a single gene ci from either of the bit strings and flipping
it to 1 − ci . In both cases the move is legal if the total number of substrings of the form
01 and 10 increases.

Figure 13.4: Example of an arc kayles position, which is equivalent to a position in crossover-
mutation.

A position in crossover-mutation is composed of a pair of binary strings, in con-


trast to the single string for each of ga1 and ga2. This results in a higher-dimensional
domain space for game positions, even when binary strings are converted to heaps
as above. Our genetic programming approach proved unable to account for so many
inputs, and hence we used standard CGT methods.
Firstly, we note a reduction to a known combinatorial game ruleset. All positions
of crossover-mutation are equivalent to certain positions from another game called
arc kayles. We first present the ruleset and then prove the equivalence.

Ruleset 4.2 (arc kayles [17]). Let G be a graph. On a player’s turn, they remove an
edge of G along with all edges incident to it.
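Values of small arc kayles positions are straightforward to compute directly from Ruleset 4.2; a minimal sketch (ours) represents a position as a frozenset of edges:

from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def arc_kayles(edges):
    # `edges` is a frozenset of frozenset({u, v}) pairs. A move deletes an
    # edge together with every edge sharing a vertex with it.
    opts = set()
    for e in edges:
        opts.add(arc_kayles(frozenset(f for f in edges if not (f & e))))
    return mex(opts)

path = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (2, 3)])
print(arc_kayles(path))  # 2: the path on four vertices has value *2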

Theorem 4.3. Let G be a cm position. Then G is equivalent to an arc kayles position.

Proof. Let GCM be a cm position of length n with B1 = (a1 , . . . , an ) and B2 = (b1 , . . . , bn ).


We will first construct the arc kayles position GAK . Then we will prove its equivalence
by showing that there is a bijection between the options of the games.
First, consider B1 of GCM . For each mutation, its representation in GAK is an edge.
Edges are incident in GAK if the corresponding bits in GCM were adjacent in B1 . Sim-
ilarly for B2 . We label the edges of GAK by the corresponding bit labels in B1 or B2 ,
respectively. For the crossover moves in GCM , if there exists a crossover move at ai ,
ai+1 and bi , bi+1 , then in GAK , there are vertices, call them vai ,ai+1 and vbi ,bi+1 , between
respectively labeled edges. Denote the edge connecting vai ,ai+1 and vbi ,bi+1 by ei,i+1 . See
Figure 13.4 for an example of the equivalence, in which we have represented the bits
as colored and uncolored vertices.
To show that GAK is equivalent to GCM via this construction, we need to show that
there exists a bijection between the options. In particular, that GCM − GAK = 0. Since
the rulesets are impartial, we consider GCM + GAK . Suppose the first player moves in
GCM with a mutation at ai . By the existence of this mutation this means that both ai−1
and ai+1 were the same as ai (if they exist); otherwise, the entropy would not have
increased. After the turn, neither can be mutated thereafter because again, it would
not increase the entropy. Also, this move disallows future crossover at ai because it
will not increase the entropy. Player 2 responds by removing the edge ai ∈ GAK . This
has the effect of removing all incident edges, in particular, ai−1 , ai+1 , ei−1,i , and ei,i+1 if
they exist (see Figure 13.5). If instead Player 1 chose a crossover move in GCM at position
i−1, this eliminates the possibility of future mutations at positions ai , ai−1 , bi , and bi−1 .
The corresponding move for Player 2 is to respond in GAK by removing the edge ei−1,i ,
which also removes all edges ai , ai−1 , bi , and bi−1 (see Figure 13.5).
Figure 13.5: Mutation: The result of a mutation move at ai in crossover-mutation (top right) and
the corresponding resulting position after a move in arc kayles (top left). Crossover: The result of a
crossover move at i − 1 in crossover-mutation (bottom right) and the corresponding resulting posi-
tion after a move in arc kayles (bottom left). Dashed lines represent the edges that were eliminated
by moving in arc kayles.

If instead Player 1 moved in GAK , we simply reverse the roles in the above argument,
and Player 2 will always have a response. Thus Player 2 will win this game under nor-
mal play. Hence GCM and GAK are equivalent.

If the cm position is of a certain form, in particular, every entry ai of B1 is the same,


and every entry of B2 is 1 − ai , then the proven equivalence to a subset of arc kayles
positions allows us to immediately deduce the game values.

Theorem 4.4 ([4]). Let G be a position in arc kayles in the form of a 2 × n grid graph.
Then G has value 0 if n is even and value ∗ if n is odd. Furthermore, this game value does not
change under the addition of up to two tufts (i. e., induced stars whose center is a vertex
of the grid graph).

We can extend this result to find values for other cm positions.

Figure 13.6: An arc kayles position, equivalent to a position in crossover-mutation when h0 is
present. Here the dashed line represents an edge that may or may not be present.

Theorem 4.5. Let G(k) be a position in arc kayles in the form of a 2 × k grid graph with
pendant edges adjacent to 3 or 4 of the four corners (see Figure 13.6). Then G(2k + 1) has
the game value ∗2 if k ∈ {0, 1} and ∗ if k ≥ 2, and G(2k) has the value 0 for all k ≥ 1 when
h0 is present.

Proof. Note that if k ≤ 1, then the possible values of G(2k + 1) are easily demonstrated
by exhaustion. The position G(2k) is just as easily found to be in 𝒫 by considering
an involution strategy, whereby the second player responds to a play on edge e with
a play on the edge equivalent to e under 180° rotational symmetry. We now proceed
by induction on k to find the remaining values of G(2k + 1) whether or not edge h0 is
present.
Let e be an edge in G(2k + 1), and consider H(e) to be the option yielded by play
on e (see Figure 13.7). We demonstrate that no option of G(2k + 1) has value ∗.

Figure 13.7: The options of G(2k + 1) from Figure 13.6.

H(h1 ) Play on edge x results in a graph of the form 2×(2k −1) with three pendant edges.
If k is sufficiently large, then this graph has value ∗ by inductive assumption,
and hence H(h1 ) does not have value ∗. Otherwise, the value can be checked
exhaustively for the base case of G(5) when k = 2 to have value ∗ with or without
the presence of h0 . Hence H(h1 ) does not have value ∗.
H(h2 ) Play on the edge x results in a position with value ∗ by Theorem 4.4. Therefore
H(h2 ) does not have value ∗.
H(h3 ) If h0 is not present, then play on edge y yields a path with value ∗ disconnected
from a 2 × (2k − 2) grid graph with two pendant edges, which by Theorem 4.4
has value 0. If h0 is present, then play on edge z yields the sum of a small graph
with value ∗ and a 2 × (2k − 4) grid graph with two pendant edges. In both cases,
the resulting sums are ∗. Therefore H(h3 ) does not have value ∗.
H(h4 ) Here h4 can be any horizontal edge to the right of h3 . Play on edge w results in
a game with a sum of two positions with opposite parity and hence has value
∗ + 0 = ∗ by Theorem 4.4, so H(h4 ) does not have value ∗.
H(v1 ) This graph has value 0 by Theorem 4.4.
H(v2 ) If h0 is present, then we have the sum of a path with value ∗2 and a game with
value ∗ by Theorem 4.4. If h0 is not present, then the path has value ∗. So H(v2 )
has value ∗3 or 0.
H(v3 ) We invoke Theorem 4.4 yet again, as the resulting graph is a pair of grid graphs
with one or two pendant edges each, both with value ∗ or both with value 0.
Therefore H(v3 ) has value 0.

Since no option of G(2k + 1) has value ∗ and G(2k + 1) ∈ 𝒩 , we see that it has value ∗
for k ≥ 2.

Theorem 4.5 leads directly to the following corollary about a family of crossover-
mutation positions.

Corollary 4.6. The cm game composed of a length-n string of all 1s and a length-n string
of all 0s has value 0 if n is odd, ∗2 if n ∈ {2, 4}, and ∗ otherwise.

Proof. This position is equivalent to the arc kayles position G(n − 1) with h0 present,
as indicated in Theorem 4.5.

It turns out that cm is also closely related to another well-studied game.



Ruleset 4.7 (cram [4]). In the impartial game cram, players take turns filling a pair of
empty orthogonally adjacent spaces in a grid. See Figure 13.8.

Figure 13.8: Example of a 3 × 5 cram position that is equivalent to the arc kayles and cm positions
in Figure 13.4.

The reader may recognize cram as the impartial version of domineering. All cm posi-
tions are also associated with 2×n cram positions, except for a few with extra pendant
edges that, if realized in cram, require a board of width at least three. This associ-
ation is produced by first considering the associated arc kayles position. See Fig-
ure 13.8 for a cram position, which is equivalent to the arc kayles position pictured
in Figure 13.4. Note that this position cannot be realized by a 2 × n cram position for
any n.
With a cm position (B1 , B2 ) with B1 = (a1 , . . . , an ) and B2 = (b1 , . . . , bn ), we associate
an arc kayles position as above, resulting in a subgraph of a 2×(n−1) grid graph with
up to four pendant edges at the corners. A 2 × n grid graph in arc kayles is equivalent
to a 2 × n grid in cram, as the removal of a vertical (horizontal) edge and its neighbors
relates to adding a vertical (horizontal) game piece to the cram board. A single vertex
missing from this position is equivalent to a blocked square in the associated grid, and
the pendant edges are associated with extra spaces, each of which shares an edge with
one of the four corner spaces, without sharing edges with any other spaces.
Most of the remaining cm positions are equivalent to 2×n positions in cram which,
while remaining unsolved, have been addressed in the literature [4]. It is worth noting
that all cm positions in which no crossover move is possible are simply represented
by a disjunctive sum of paths in arc kayles, whose values are known [15].

5 Conclusion and further research


We have seen a possible application of genetic programming to the determination of
Grundy values of impartial combinatorial games. In addition, we have seen that it
can both provide an exact formula and simply inform our own mathematical analysis.
Note that the game for which it proved most useful, ga2, could likely have been solved
without using genetic programming, but simply by examining the computed Grundy

values. But we have also seen that it was solved through the use of genetic program-
ming, and therefore this method could prove useful in the future. At the very least, it
could be used to reduce the time and effort taken to conjecture formulas for Grundy
values.
We are curious whether or not genetic programming can be used for problems
within CGT that a mathematician simply examining a list of values is unlikely to solve.
To answer this, we suggest further efforts in this direction. It will be very useful, for
example, to compile a database of impartial combinatorial games with known and
as yet unknown solutions. This could help inform the choice of default functions to
include in future genetic programming attempts.
There are modifications that we suggest be made to future GP for CGT projects.
Firstly, it would be beneficial to develop a more robust fitness function. As there is
no obvious metric over the set of nimbers outside of the nim-sum, an analytical ap-
proach to metrics over impartial games would be helpful. Secondly, the method for
fitness employed in [16] does not use precomputed data points at all. Instead, the au-
thor determines the fitness of a program by comparing the computed outcome classes
of a set of positions with those of its options and relating the fitness to the number
of deviations from the basic tenets of impartial games that are found among these
computations. Something similar could be used for Grundy value programming, in-
volving the mex (minimum excludant) function. However, the distance between the
actual and computed values remains a possible stumbling block.

Bibliography
[1] M. H. Albert, R. J. Nowakowski, and D. Wolfe, Lessons in Play: An Introduction to Combinatorial
Game Theory, A K Peters, Ltd., Wellesley, MA, 2007.
[2] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 1, second edition, A K Peters, Ltd., Natick, MA, 2001.
[3] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 2, second edition, A K Peters, Ltd., Natick, MA, 2003.
[4] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 3, second edition, A K Peters, Ltd., Natick, MA, 2003.
[5] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 4, second edition, A K Peters, Ltd., Wellesley, MA, 2004.
[6] A. Bradford, J. K. Day, L. Hutchinson, B. Kaperick, C. E. Larson, M. Mills, D. Muncy, and
N. Van Cleemput, Automated conjecturing II: chomp and reasoned game play, J. Artificial
Intelligence Res. 68 (2020), 447–461.
[7] R. M. Brady, Optimization strategies gleaned from biological evolution, Nature 317 (1985),
804–806.
[8] J. H. Conway, On Numbers and Games, second edition, A K Peters, Ltd., Natick, MA, 2001.
[9] H. E. Dudeney, The Canterbury Puzzles (and Other Curious Problems), EP Dutton, New York,
1908.
[10] P. Galinier and J.-K. Hao, Hybrid evolutionary algorithms for graph coloring, J. Comb. Optim. 3
(1999), 379–397.

[11] P. M. Grundy, Mathematics and games, Eureka 2 (1939), 6–8.


[12] R. K. Guy and C. A. B. Smith, The G-values of various games, Math. Proc. Cambridge Philos. Soc.
52(3), (1956), 514–526.
[13] A. Hauptman and M. Sipper, Analyzing the intelligence of a genetically programmed chess
player, In Late Breaking Papers at the Genetic and Evolutionary Computation Conference 2005,
Washington DC, June 2005.
[14] G. Hornby, A. Globus, D. Linden, and J. Lohn, Automated antenna design with evolutionary
algorithms, In Space 2006, 2006.
[15] M. Huggan and B. Stevens, Polynomial time graph families for Arc Kayles, Integers 16 (2016),
#A86.
[16] M. Oltean, Evolving winning strategies for Nim-like games, In IFIP Student Forum (2004),
353–364.
[17] T. J. Schaefer, On the complexity of some two-person perfect-information games, J. Comput.
System Sci. 16 (1978), 185–225.
[18] M. Sipper, Y. Azaria, A. Hauptman, and Y. Shichel, Designing an evolutionary strategizing
machine for game playing and beyond, IEEE Transactions on Systems, Man, and Cybernetics,
Part C 37 (4), (2007), 583–593.
[19] R. Sprague, Über mathematische Kampfspiele, Tohoku Mathematical Journal, First Series 41
(1935), 438–444.
[20] T. Stephens, gplearn: genetic programming in Python, 2015.
Douglas E. Iannucci and Urban Larsson
Game values of arithmetic functions
Abstract: Arithmetic functions in number theory meet the Sprague–Grundy function
from combinatorial game theory. We study a variety of two-player games induced by
standard arithmetic functions, such as Euclidian division, divisors, remainders and
relatively prime numbers, and their negations.

1 Introduction
Consider the following situation: two players, Alice and Bob, alternate to partition
a given finite number of positive integers into components of the form of a nontrivial
Euclidian division. Whoever fails to follow the rule, because each number is a “1”,
loses the game. You are only allowed to split one number at a time. For example, if
Alice starts from the number 7, then her options are 1 + ⋅ ⋅ ⋅ + 1, 2 + 2 + 2 + 1, 3 + 3 + 1, 4 + 3,
5+2, and 6+1; here the “+” sign from arithmetic functions becomes the disjunctive sum
operator (a convenient game component separator) in the game setting. By observing
that we may remove any pair of the same numbers (by mimicking strategy) and we
may remove a one unless the option is the terminal position (since its set of options is
empty), the set of options from 7 simplifies to {1, 2, 4+3, 5+2, 6}. Suppose now that Alice
starts playing from the disjunctive sum 7 + 2. By the above analysis it is easy to find a
winning move to 2+2+2+1+2. What if she instead starts from the composite game 7+3?
We study two-player normal-play games defined with the nonnegative or positive
integers as the set of positions. The two players alternate turns, and if a player has no
move option, then he/she loses. At each stage of play, the move options are the same
independently of who starts. In combinatorial game theory, this notion is referred to as
impartial. Games terminate in a finite number of moves, and there is a finite number of
options from each game position, i. e., games are short. These game axioms will allow
us to use the famous theory discovered independently by Sprague [9] and Grundy [4],
which generalizes normal-play nim, analyzed by Bouton [2], into disjunctive sum play
of any finite set of impartial normal-play games. Let opt(G) denote the set of options

Acknowledgement: Urban Larsson was partially supported by the Killam Trusts.


This work started when the second author visited the first author at the University of the Virgin Islands
in April 2015. It breaks my heart to acknowledge that the first author passed away 15 October 2020.
Doug is deeply missed. Many thanks to the referee, whose comments helped to improve the readability
of this paper.

Douglas E. Iannucci, University of the Virgin Islands, St. Thomas, Virgin Islands, US
Urban Larsson, National University of Singapore, Singapore, Singapore, e-mail:
urban031@gmail.com

https://doi.org/10.1515/9783110755411-014

of the impartial game G. If G = {g ′ ∈ opt(G)} and H = {h′ ∈ opt(H)} are impartial
games, then the disjunctive sum of G and H is the game G + H = {G + h′ , g ′ + H : h′ ∈
opt(H), g ′ ∈ opt(G)}.
Arithmetic functions are at the core of number theory, in a similar sense as nim
and the Sprague–Grundy theory are central to the theory of combinatorial games. We consider
arithmetic functions [5] of the form f : X → Y, where the set X is either the nonnega-
tive integers ℕ0 = {0, 1, . . .} or the positive integers ℕ = {1, 2, . . .}, and where typically
Y = 2^X is the set of all subsets of the nonnegative or positive integers, respectively. In
some settings, for example, when the arithmetic function is counting the instances of
another arithmetic function, we may take Y = ℕ0 ; in these cases, we will refer to f as
a counting function and to the games as counting games.
Arithmetic functions may conveniently be interpreted as rulesets of impartial
games, and here we present old and novel games within a classification scheme. Each
arithmetic function induces a couple of rulesets, and we let opt : X → Z define the
set of move options from n ∈ X, given some arithmetic function f , with sometimes an
imposed terminal sink, or a modified codomain, for example, in the powerset games to
come, Z = 2^(2^X) ; in the singleton and counting game cases, we may take Z = Y (modulo
sometimes adjustment for 0).
More specifically, we interpret the arithmetic functions in terms of heap games;
for each game position, i. e., heap size represented by a number (of pebbles), there
are two main variations.
– A player can move-to a number or a disjunctive sum of numbers induced by the
arithmetic function.
– A player can subtract a number or a disjunctive sum of numbers induced by the
arithmetic function.

We will use the name of the arithmetic function and prepend the letters M or S, respec-
tively, for the move-to and subtract versions of a particular game. The following two
examples are excerpts from Section 2.1.

Example 1. If the players may move-to a divisor, then we get for example: from 6 the
options are 1, 2, and 3. From 7 the only option is 1. Here the divisors must be proper
divisors.

Example 2. If the players may subtract a divisor, we get for example: from 6 the op-
tions are 5, 4, 3, and 0, and from 7 the options are 0 and 6.

Before this paper, instances where number theory connects with impartial games
might individually have seemed like “lucky cases”. However, we feel that the relatively
large number of such examples justifies a more systematic study.1

1 See classical works such as Winning Ways [1] for results on impartial games coinciding with number
theory.

Let us list some game rules induced by some standard arithmetic functions. When
there is only one single option, we may omit the set brackets.
1. The aliquot (divisor) games:
(a) maliquot. Move-to a proper divisor, i. e.,

opt(n) = {d : d | n, 0 < d < n}.

(b) saliquot. Subtract a divisor, i. e.,

opt(n) = {n − d : d | n, d > 0}.

2. The aliquant (nondivisor) games:


(a) maliquant. Move-to a nondivisor, i. e.,

opt(n) = {k : 1 ≤ k ≤ n, k ∤ n}.

(b) saliquant. Subtract a nondivisor, i. e.,

opt(n) = {n − k : 1 ≤ k ≤ n, k ∤ n}.

3. The τ-games:2
(a) mtau. Move-to the number of proper divisors, i. e., opt(n) = τ′ (n).
(b) stau. Subtract the number of divisors, i. e., opt(n) = n − τ(n).
4. The totative (relative prime residue) and the nontotative games:3
(a) totative. Move-to any relatively prime residue, i. e.,

opt(n) = {k : 1 ≤ k ≤ n, (k, n) = 1}.

(b) nontotative. Move-to any nonrelatively prime residue, i. e.,

opt(n) = {k : 1 ≤ k < n, (k, n) > 1}.

5. The totient (ϕ) games:


(a) totient. Move-to the number of relatively prime residues, i. e.,

opt(n) = ϕ(n).

2 Here τ(n) counts the natural divisors of n: τ(n) = ∑d|n 1. The divisor function τ is multiplicative with
τ(pa ) = a + 1 for all primes p and natural numbers a. In one of our game settings, we use instead the
number of proper divisors, and so let τ′ = τ − 1, so that, in particular, τ′ (1) = 0 and τ′ (2) = 1 (here we
lose multiplicativity).
3 The move-to and subtract variations are the same, because (k, n) = 1 if and only if (n − k, n) = 1.

(b) nontotient. Instead, subtract this number, i. e.,

opt(n) = n − ϕ(n).

6. dividing. Divide the given number into at least two equal parts, i. e.,

opt(n) = {k + k + ⋅ ⋅ ⋅ + k (m copies of k) : km = n, m > 1}.

7. dividing-and-remainder. Divide the given number into equal parts and a re-
mainder, which is smaller than the other parts and possibly 0, i. e.,

opt(n) = {k + k + ⋅ ⋅ ⋅ + k + r (m copies of k) : km + r = n, m > 0, 0 ≤ r < k}.

This game has three simpler variations, as defined in Section 4.2.


8. factoring. Factor the given number into at least two components and at most the
number of prime factors, counting multiplicity, i. e.,

opt(n) = {a1 + a2 + ⋅ ⋅ ⋅ + ak : 1 < a1 ≤ a2 ≤ ⋅ ⋅ ⋅ ≤ ak , a1 a2 ⋅ ⋅ ⋅ ak = n}.

Item 7 here is the game in the first paragraph of the paper.


The goal of this paper is to evaluate the nim-values, also known as Sprague–
Grundy values, of these games, and hence winning strategies can be computed, via
the nim-sum operator, in disjunctive sum with any other normal-play game. The nim-
values of a ruleset are defined via the minimal excludant function mex : 2^X → ℕ0 ,
where X is the set of nonnegative (or positive) integers. Let A ⊂ ℕ0 be a strict subset
of the nonnegative integers. Then mex(A) = min{x : x ∈ ℕ0 \ A}, and the nim-value
is 𝒮𝒢 (n) = mex{𝒮𝒢 (x) : x ∈ opt(n)}. Note that if there is no move option from n,
then 𝒮𝒢 (n) = 0. Recall that the nim-sum is used to compute the nim-value of a dis-
junctive sum of games, i. e., 𝒮𝒢 (∑ ni ) = ⨁ 𝒮𝒢 (ni ), where ∑ is the disjunctive sum
operator, and ⨁ is the sum modulo 2, without carrying, of the summands in their binary
representations.
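As an illustration (a sketch of ours), the nim-values of the aliquot games listed above can be computed directly from these definitions:

from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

@lru_cache(maxsize=None)
def sg_maliquot(n):
    # maliquot: move to a proper divisor; 1 is the terminal position.
    return mex({sg_maliquot(d) for d in divisors(n) if d < n})

@lru_cache(maxsize=None)
def sg_saliquot(n):
    # saliquot: subtract any divisor; 0 is the terminal position.
    if n == 0:
        return 0
    return mex({sg_saliquot(n - d) for d in divisors(n)})

print([sg_maliquot(n) for n in range(1, 13)])
# [0, 1, 1, 2, 1, 2, 1, 3, 2, 2, 1, 3]: equals Omega(n), nim in disguise
print([sg_saliquot(n) for n in range(1, 13)])
# [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3]: matches v(n) + 1 for these n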
If f is a counting function, then there is exactly one option. The game on a single
heap reduces to the trivial she-loves-me-she-loves-me-not game, and in particular, the
𝒮𝒢 -function reduces to a binary output, that is, 𝒮𝒢 (n) ∈ {0, 1}. Hence such rulesets will
be referred to as binary rulesets. The same game, however, played on several heaps,
with at least one nonbinary ruleset, can be highly nontrivial and results in great com-
plexity. The question of which heap to move on does not have a polynomial-time solu-
tion in general, as many arithmetic functions are known to be intractable. Our
inspiration for studying binary counting games came from Harold Shapiro’s classifi-
cation for the recurrence of the totient function [7]. A couple of examples will clarify
this type of issue.

Example 3. Let 0 be the empty heap. Suppose that from a heap of size n > 0, the
players can remove the number of divisors of n. The option of n = 1 is 0. A heap of
size 2 also has 0 as an option, but 1 is the option of 3. The nim-sequence thus starts:
0, 1, 1, 0, 0, 1. The heap of size 5 has 3 as the option, for which the nim-value is 0. On
one heap, although play is trivial, the problem of determining the winner is as hard as
the complexity of the sequence.

Example 4. Let 0 be the empty heap. Suppose that from a heap of size n > 0 the players
can move-to the number of proper divisors of n. The option of n = 1 is 0. The heaps of
size two and three have moves to the heap with a single pebble. The number of proper
divisors of n = 4 is 2, and hence the option is 2. As for all primes, the option of n = 5 is
the heap of size one. Thus the nim-sequence starts: 0, 1, 0, 0, 1, 0.

Example 5. Consider binary games. Of course, even playing a disjunctive sum of bi-
nary games gives only binary values. Consider, for example, totient, where 𝒮𝒢 (2+3+
4+5) = 𝒮𝒢 (2)⊕𝒮𝒢 (3)⊕𝒮𝒢 (4)⊕𝒮𝒢 (5) = 1⊕0⊕0⊕1 = 0. Hence 2+3+4+5 is a second player
winning position. To see this in play, suppose that the first player selects the heap of
size 4 and moves to 2 + 3 + ϕ(4) + 5 = 2 + 3 + 2 + 5. Now 𝒮𝒢 (2 + 3 + 2 + 5) = 1 ⊕ 0 ⊕ 1 ⊕ 1 = 1,
which is a winning position for the player to move, and indeed, since every move
changes the parity, we have automatic, “random” optimal play even if we play a sum
of games, provided that they are all binary. In particular, if we play a disjunctive sum of
totient games, then the optimal strategy is to play any move. Hence these games seem
less interesting in that respect as 2-player games, but suppose that we instead play a
disjunctive sum of the totient game G with the totative game H. Now an efficient algo-
rithm for computing the binary nim-value (see Theorem 8) is interesting again. What
is a winning move in the first player winning position 7totient + 7totative ? (There are
exactly three winning moves.)

Those examples motivate play on arithmetic counting functions. Other examples


of binary games are the fullset games, where each move is defined by playing to a
disjunctive sum of all numbers induced by the arithmetic function.
For the powerset rulesets, the range of the opt function is the set of all subsets
of natural numbers; a generic game on a single heap decomposes to play on several
heaps. Hence the full 𝒮𝒢 -function is intrinsically motivated in the solution of a single
game, even if the starting position is a single heap.

Example 6. Consider the game played from a heap of size n, where the options are to
play to any nonempty set of proper divisors of n. If n = 6, then the options are the
single heaps of size 1, 2, 3, respectively, the pairs of heaps 1 + 2, 1 + 3, and 2 + 3, and the
triple 1 + 2 + 3. A heap of size one has no option, and a heap of size two or three has a
heap of size one as option. Hence 𝒮𝒢 (6) = 2. The nim-value of each prime is one, and
so on.

Example 7. Consider the game played from a heap of size n, where the options are
to play to any finite set of relatively prime residues smaller than n. If n = 5, then the
options are all nonempty subsets of {1, 2, 3, 4}. In spite of the relatively large number of
options, in this particular case, the 𝒮𝒢 -computation becomes easy. A heap of size one
has no option, and so 𝒮𝒢 (1) = 0. Therefore 𝒮𝒢 (2) = 1, and so 𝒮𝒢 (3) = 2. A heap of size
4 has 1, 3, and 1 + 3 as options, and so 𝒮𝒢 (4) = 1. By this, obviously, 𝒮𝒢 (5) = 4. A heap
of size 6 has few options, and easily 𝒮𝒢 (6) = 1. A heap of size 7 has many options, and
likewise easily 𝒮𝒢 (7) = 8, the smallest unused power of two. This game is revisited in
Theorem 17.

In view of the above examples, we use the following classification of games on


arithmetic functions; an arithmetic game satisfies one of these items.
(i) Play singletons from the arithmetic property.
(ii) Play the number of elements from the arithmetic property.
(iii) Play the disjunctive sum of all numbers from the arithmetic property.
(iv) Play any nonempty subset of numbers from the arithmetic property as a disjunc-
tive sum.

The word “Play . . . ” (read: “Play is defined by . . .”) is intentionally left open for inter-
pretation. Here it will have one out of two meanings; either the players move-to the
numbers, or they subtract the numbers from the given heap (size). Items (iii) and (iv)
typically split a heap into several heaps to be played in a disjunctive sum of heaps.
Note that (iii) is binary, although it does not concern counting functions. The rulesets
induced by (iii) and (iv) are not listed above, but naturally build on items 1, 2, and 4.
We define them in their respective sections.
Some arithmetic functions directly induce a disjunctive sum of games, such as the
division algorithm or the factoring problem. For the ruleset on Euclidian division from
the first paragraph (Section 4.2), we conjecture that the relative nim-values 𝒮𝒢 (n)/n
tend to 0 with increasing heap sizes.
In Section 2, we study singleton games. In Section 3, we study counting games.
In Section 4, we study dividing games, where division induces a disjunctive sum of
games, and similarly for Section 5 with factoring games. In Section 6, we study dis-
junctive sum games on the full set induced by the arithmetic function. In Section 7, we
study powerset disjunctive sum games. Section 8 is devoted to some future directions.
For reference, let us include a table of the studied rulesets in order of appearance, including some significant properties. The abbreviations are m-t: move-to, subtr.: subtraction, div.: divisor, rel.: relative, n.: number, pr.: problem, disj.: disjunctive. The solution functions are defined in their respective sections, but let us list them here as well. In particular, we encounter indexing functions, where the numbers with a certain property are enumerated, starting with 1 for the smallest member. In the table, we find the following functions: Ω, the number of prime divisors counted with multiplicity; ω, the number of prime divisors counted without multiplicity; Ω2, the number of prime divisors counted with multiplicity, unless the divisor is 2, which is counted without multiplicity; v, the usual 2-valuation; io, the index of the largest odd divisor; ip, the index of the smallest prime divisor.

Ruleset | description | arithmetic f. | solution f. | Sec.
maliquot | m-t div. | aliquot | Ω | 2.1.1
saliquot | subtr. div. | aliquot | v | 2.1.2
maliquant | m-t nondiv. | aliquant | io | 2.2.1
saliquant | subtr. nondiv. | aliquant | partial | 2.2.2
totative | m-t rel. prime | totative | ip | 2.3
nontotative | m-t nonrel. prime | totative | partial | 2.4
totient | m-t n. rel. prime | totient | Shapiro | 3.1.1
nontotient | m-t n. nonrel. prime | totient | — | 3.1.2
mtau | m-t n. div. | τ | observation | 3.2.1
stau | subtr. n. div. | τ | — | 3.2.2
mΩ | m-t n. prime div. | Ω | observation | 3.3.1
sΩ | subtr. n. prime div. | Ω | — | 3.3.2
mω | m-t n. dist. prime div. | ω | observation | 3.3.3
sω | subtr. n. dist. prime div. | ω | — | 3.3.4
dividing | m-t disj. sum div. | aliquot | Ω2 | 4.1
div.-and-res. | m-t disj. sum Eucl. div. | Eucl. div. | — | 4.2
compl.-grundy | m-t disj. sum Eucl. div. | Eucl. div. | — | 4.2
div.-throw-res. | m-t disj. sum Eucl. div. | Eucl. div. | io | 4.2
res.-throw-div. | m-t residue | Eucl. div. | yes | 4.2
m-factoring | m-t factoring | factoring | Ω | 5
s-factoring | subtr. factoring | factoring | — | 5
fs maliquot | m-t disj. sum all div. | aliquot | square free | 6
ps maliquot | m-t disj. sum div. | aliquot | — | 7
ps saliquot | subtr. disj. sum div. | aliquot | v | 7
ps maliquant | m-t disj. sum div. | aliquant | io | 7
ps saliquant | subtr. disj. sum div. | aliquant | ip | 7
ps totative | m-t disj. sum div. | totative | — | 7
ps nontotative | m-t disj. sum div. | totative | — | 7

2 Singletons
This section concerns items 1, 2, and 4 from the introduction, the aliquots, aliquants,
and totatives.

2.1 The aliquots


The first game maliquot is “nim in disguise” (think of the prime factors of a number as the pebbles in a heap), but since the factoring problem is hard, the game is equally hard. Here the arithmetic function is f(n) = {d : d | n, n ∈ ℕ0}. In this section the set of game positions is ℕ. Since all nonnegative integers divide 0, we do not admit 0 to the set of game positions. The second game saliquot turns out to be somewhat more interesting.
Let n ∈ ℕ. Then Ω(n) is the number of prime factors of n, counting multiplicities, and v = v(n) is the 2-valuation of n = 2^v m, where m is odd.

2.1.1 maliquot, move-to a proper divisor

Here the set of move options from n is opt(n) = {d : d | n, d ≠ n, n ∈ ℕ}.

Example 8. From 6 the options are 1, 2, and 3. From 7 the only option is 1.

The unique terminal position is 1. It follows that 𝒮𝒢(1) = 0, and if p is a prime, then 𝒮𝒢(p) = 1. We have computed the first few nim-values.

n opt(n) 𝒮𝒢(n)
1 ⌀ 0
2 1 1
3 1 1
4 1,2 2
5 1 1
6 1, 2, 3 2
7 1 1
8 1, 2, 4 3

Theorem 1. Consider maliquot. Then 𝒮𝒢 (n) = Ω(n) for all n.

Proof. We have that 0 = 𝒮𝒢(1) = Ω(1), since there are no options from 1, and 1 has no prime factors. Suppose that n > 1 has k prime factors, counting multiplicities. Then, for each x ∈ {1, . . . , k − 1}, there is a divisor of n corresponding to a move-to a number with x prime factors. Since a player is not allowed to divide by 1, the number of prime factors strictly decreases with each move, and so by induction there is no option of nim-value k. By the mex-rule the result holds, 𝒮𝒢(n) = k = Ω(n).
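Theorem 1 is easy to confirm by machine for small heaps; the following sketch (ours) compares a direct mex computation with a trial-division count of prime factors:

    def mex(s):
        m = 0
        while m in s:
            m += 1
        return m

    def big_omega(n):  # number of prime factors of n, with multiplicity
        count, p = 0, 2
        while p * p <= n:
            while n % p == 0:
                n //= p
                count += 1
            p += 1
        return count + (1 if n > 1 else 0)

    N = 300
    g = [0] * (N + 1)
    for n in range(2, N + 1):
        g[n] = mex({g[d] for d in range(1, n) if n % d == 0})
    assert all(g[n] == big_omega(n) for n in range(1, N + 1))  # Theorem 1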

2.1.2 saliquot, subtract a divisor

This game is defined on the nonnegative integers ℕ0 . Here

opt(n) = {n − d : d | n, d > 0}.

Example 9. The options of 6 are 5, 4, 3, and 0. From 7 the options are 0 and 6.

Since 0 is always an option from n, it is clear that the nim-value of a nonzero po-
sition is greater than 0. The initial nim-values are as follows:

n opt(n) 𝒮𝒢(n)

0 ⌀ 0
1 0 1
2 0, 1 2
3 0, 2 1
4 0, 2, 3 3
5 0, 4 1
6 0, 3, 4, 5 2
7 0, 6 1
8 0, 4, 6, 7 4

As the table indicates, the nim-values concern the 2-valuation of n.

Theorem 2. Consider saliquot. Then 𝒮𝒢(0) = 0. Suppose that n > 0, and let n = 2^k m, where 2 ∤ m and k ≥ 0. Then 𝒮𝒢(n) = v(n) + 1 = k + 1.

Proof. The case 0 | 0 is excluded in the definition of opt. Hence there is no move from 0, and so 𝒮𝒢(0) = 0. Note that if n > 0, then 0, of nim-value 0, is an option.
Suppose that n is odd. Then, for all d | n, n − d is even. By induction the even numbers have nim-value greater than one. Hence, since 0 is an option, the mex-function gives 0 + 1 = 1 as the nim-value of n = 2^0 m.
Suppose that n is even. Then n − 1, which is odd, is an option. It has nim-value 1 by induction. (Hence 𝒮𝒢(n) ⩾ 2.) Let d = 2^ℓ q ⩽ n with q odd, where we may assume that ℓ > 0, since we are interested in the even options.
Since d is a divisor of n, we have that 0 < ℓ ⩽ k and q | m, with m > 1 odd. We get n − d = 2^k m − 2^ℓ q = 2^ℓ(2^{k−ℓ} m − q). The number x = 2^{k−ℓ} m − q is odd, of nim-value 1 by induction, unless k = ℓ. In this case, if m = q, then the option is 0, so suppose m > q. Since both m and q are odd, the option of n has a greater 2-valuation than n, i. e., v(n − d) ≥ k + 1. Therefore no option has 2-valuation k, and hence by induction no option has nim-value k + 1.
Since ℓ can be chosen freely in the interval 0 ≤ ℓ < k, by induction all nim-values 0 ≤ 𝒮𝒢(n − d) ≤ k can be reached; since m > 1, we may take q < m. The result follows.

2.2 The aliquants


The aliquant games are somewhat more intricate than the aliquots, but we still have
an explicit solution in the first variation. Here f (n) = {d : d ∤ n, n ∈ ℕ0 }.

2.2.1 maliquant: move-to a nondivisor

Since all numbers divide 0, 0 does not have any options, and hence 𝒮𝒢 (0) = 0. On
the other hand, 0 does not divide any nonzero number, and hence 0 will be an option
from each number. The options are opt(n) = {d < n : d ∤ n, n ∈ ℕ0 }.

n opt(n) 𝒮𝒢(n)

0 ⌀ 0
1 0 1
2 0 1
3 0, 2 2
4 0, 3 1
5 0, 2, 3, 4 3
6 0, 4, 5 2
7 0, 2, 3, 4, 5, 6 4
8 0, 3, 5, 6, 7 1

In maliquant the 2-valuation plays an opposite role to that in saliquot; here only the odd part of n determines the nim-value. Let io : ℕ → ℕ be the index function for the largest odd factor of a given natural number; that is, if n = 2^k(2m − 1), then io(n) = m. Clearly, io(2m − 1) = m, and io(n) ≤ (n + 1)/2.

Lemma 1. For all n ∈ ℕ, the numbers in the set {n, . . . , 2n − 1} contribute all maximal
odd factor indices in the set {1, . . . , n}, that is, {io (x) : n ≤ x ≤ 2n − 1} = {1, . . . , n}.

Proof. We use induction on n. Assuming that the statement of the lemma holds for all natural numbers up to n, we consider the set {n + 1, . . . , 2n − 1, 2n, 2n + 1}. As io(2n) = io(n) and io(2n + 1) = n + 1, it follows that {io(x) : n + 1 ≤ x ≤ 2n + 1} = {1, . . . , n, n + 1}.

Theorem 3. Consider maliquant. Then 𝒮𝒢 (0) = 0, and 𝒮𝒢 (n) = io (n) for all n ∈ ℕ.

Proof. We use induction on n ∈ ℕ. If n is odd, then write n = 2m − 1, whence {m − 1, . . . , 2m − 3} ⊂ opt(n). By Lemma 1 and the induction hypothesis we have {𝒮𝒢(k) : m − 1 ≤ k ≤ 2m − 3} = {1, . . . , m − 1}. By the induction hypothesis, 𝒮𝒢(k) = io(k) ≤ m/2 for all k < m − 1, and hence 𝒮𝒢(n) = m = io(n); even numbers are also options, but they have nim-values smaller than m/2 by induction.
If n is even, then write n = 2^k m with k > 0 and m odd. Thus it suffices to prove 𝒮𝒢(n) = io(m). Note that for all positive integers l, we have io(l) = io(m) if and only if l = 2^j m for some j ≥ 0. Thus if m = 1, then opt(n) omits all those elements 2^j with index 1, and hence 𝒮𝒢(n) = 1. Otherwise, if m > 1, then we observe that

{2^{k−1} m + 1, 2^{k−1} m + 2, . . . , 2^k m − 1} ⊂ opt(n).

If we augment this set with the element 2^k m, then by Lemma 1 the indices of its elements are {1, 2, . . . , 2^{k−1} m + 1}, which includes {1, 2, . . . , io(m) − 1} as a subset. However, opt(n) contains no elements of the form 2^j m, and hence io(m) does not appear among the elements io(x) for x ∈ opt(n), whereas all elements of {1, . . . , io(m) − 1} do appear. Thus by the induction hypothesis 𝒮𝒢(n) = io(m).

2.2.2 saliquant: subtract a nondivisor

Here we run into some mysterious sequences. We can only prove partial results. The
options are opt(n) = {n − d : d ∤ n, n ∈ ℕ0 }.

n opt(n) 𝒮𝒢(n) n opt(n) 𝒮𝒢(n)

0 ⌀ 0 10 1, 2, 3, 4, 6, 7 2
1 ⌀ 0 11 1, 2, 3, 4, 5, 6, 7, 8, 9 5
2 ⌀ 0 12 1, 2, 3, 4, 5, 7 4
3 1 1 13 1, . . . , 11 6
4 1 1 14 1, . . . , 6, 8, 9, 10, 11 6
5 1, 2, 3 2 15 1, . . . , 9, 11, 13 7
6 1, 2 1 16 1, . . . , 7, 9, 10, 11, 13 7
7 1, 2, 3, 4, 5 3 17 1, . . . , 15 8
8 1, 2, 3, 5 3 18 1, . . . , 8, 10, 11, 13, 14 4
9 1, 2, 3, 4, 5, 7 4 19 1, . . . , 17 9

The odd heap sizes turn out to be simple. We give some more nim-values for even heap
sizes n = 0, 2, . . .:

𝒮𝒢 (n) = 0, 0, 1, 1, 3, 2, 4, 6, 7, 4, 7, 5, 10, 12, 10, 13, 15, 8, 13, 9, 17, 17, 16, 11, 22, . . . .

For even heap sizes n ≥ 2,

𝒮𝒢 (n)/n = 0, 1/4, 1/6, 3/8, 1/5, 1/3, 3/7, 7/16, 2/9, 7/20, 5/22, 5/12, 6/13, 5/14,
13/30, 15/32, 4/17, 13/36, 9/38, 17/40, 17/42, 4/11, 11/46, 11/24, . . . .

Sorting the ratios 𝒮𝒢(n)/n by size, we find that the associated pairs [n, 𝒮𝒢(n)] for the smallest ratios, [6, 1], [10, 2], [18, 4], [22, 5], [34, 8], [38, 9], [46, 11], satisfy (n − 2)/𝒮𝒢(n) = 4. Half of each heap size in this sequence is odd, and we get the odd numbers 3, 5, 9, 11, 17, 19, 23, . . . . We have not investigated these patterns further, but we believe that 𝒮𝒢(n) ≥ (n − 2)/4 for all n. Indeed, plotting the first 1000 nim-values in Figure 14.1 suggests that this lower bound persists.

Figure 14.1: The initial 1000 nim-values of saliquant.
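The believed bound, together with the odd-heap formula of Theorem 4 below, can be checked for the first 1000 heaps with a short script (our own verification sketch):

    def mex(s):
        m = 0
        while m in s:
            m += 1
        return m

    N = 1000
    g = [0] * (N + 1)  # heaps 0, 1, 2 are terminal: every smaller d divides n
    for n in range(1, N + 1):
        g[n] = mex({g[n - d] for d in range(2, n) if n % d != 0})

    assert all(4 * g[n] >= n - 2 for n in range(N + 1))           # conjectured bound
    assert all(g[n] == (n - 1) // 2 for n in range(1, N + 1, 2))  # odd heaps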
Theorem 4. Consider saliquant. Then 𝒮𝒢(0) = 0, and if n is odd, then 𝒮𝒢(n) = (n − 1)/2. Moreover, 𝒮𝒢(n) < n/2.

Proof. Suppose that the statement holds for all m < n. If n = 2x + 1, then each nonnegative integer smaller than x is represented as a nim-value; specifically, for each odd number 2y + 1 with y < x, 𝒮𝒢(2y + 1) = y. Moreover, each such odd number is an option of n, since each even integer is a nondivisor of n = 2x + 1. Finally, by induction each even number smaller than n has a nim-value smaller than x, and we are done with the first part of the proof.
Suppose next that n = 2x. Since both 1 and 2 are divisors, the largest option is at most 2x − 3. By induction the nim-value of any option is at most 𝒮𝒢(2x − 3) = 𝒮𝒢(2(x − 2) + 1) = x − 2, and hence 𝒮𝒢(n) ≤ x − 1 < n/2.

2.3 The totatives: move-to a relatively prime


Here f (n) = {x : (x, n) = 1}. The totative games are defined by moving to a relatively
prime residue. We list the first few nim-values of totative:

n opt(n) 𝒮𝒢(n)

1 ⌀ 0
2 1 1
3 1, 2 2
4 1, 3 1
5 1, 2, 3, 4 3
6 1, 5 1
7 1, . . . , 6 4
8 1, 3, 5, 7 1

We have the following result. The solution involves the function ip , the index of the
smallest prime divisor of a given number, where the prime 2 has index 1.

Theorem 5. Consider totative. The nim-value of n > 1 is the index of the smallest prime
divisor of n, and 𝒮𝒢 (1) = 0.

Proof. There is no move from 1, because the only number relatively prime with 1 is 1, and options are smaller than the position. Hence, by the definition of the mex-function, 𝒮𝒢(1) = 0. Also, 𝒮𝒢(2) = 1, since the only number smaller than and relatively prime to 2 is 1, which has nim-value 0, and ip(2) = 1. Suppose that the result holds for all numbers smaller than n. From n a player can only move to a smaller number with no common divisor with n. Therefore no option has the same smallest prime divisor as n, and by induction no option has nim-value ip(n).
Thus the index of the smallest prime divisor of n is the nim-value provided that each prime with a smaller index appears as an option. However, the set of numbers smaller than n and relatively prime to n contains in particular all primes smaller than the smallest prime factor of n. By induction these options realize all nonzero nim-values below ip(n), and the move-to 1 (of nim-value 0) is always available.

This is the sequence A055396 in Sloane [8]: “Smallest prime dividing n is a(n)th
prime (a(1) = 0).”
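A brute-force check of Theorem 5 against this sequence (our own sketch; the helper computes the index of the smallest prime divisor by trial division):

    from math import gcd

    def mex(s):
        m = 0
        while m in s:
            m += 1
        return m

    def ip(n):  # the smallest prime divisor of n is the ip(n)-th prime
        primes, k = [], 2
        while True:
            if all(k % p for p in primes):  # k is prime
                primes.append(k)
                if n % k == 0:
                    return len(primes)
            k += 1

    N = 300
    g = [0, 0]
    for n in range(2, N + 1):
        g.append(mex({g[m] for m in range(1, n) if gcd(m, n) == 1}))
    assert all(g[n] == ip(n) for n in range(2, N + 1))  # Theorem 5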

2.4 The nontotatives: move-to a nonrelatively prime


Here is a table of the first few nim-values of nontotative.

n opt(n) 𝒮𝒢(n) n opt(n) 𝒮𝒢(n)

0 ⌀ 0 10 0, 2, 4, 5, 6, 8 5
1 0 1 11 0 1
2 0 1 12 0, 2, 3, 4, 6, 8, 9, 10 6
3 0 1 13 0 1
4 0, 2 2 14 0, 2, 4, 6, 7, 8, 10, 12 7
5 0 1 15 0, 3, 5, 6, 9, 10, 12 4
6 0, 2, 3, 4 3 16 0, 2, 4, 6, 8, 10, 12, 14 8
7 0 1 17 0 1
8 0, 2, 4, 6 4 18 0, 2, 3, 4, 6, 8, 9, 10, 12, 14, 15, 16 9
9 0, 3, 6 2 19 0 1

This sequence does not yet appear in OEIS [8], but curiously enough, a nearby sequence is A078898, “Number of times the smallest prime factor of n is the smallest prime factor for numbers ≤ n; a(0) = 0, a(1) = 1.” For n ≥ 2, a(n) tells in which column of the sieve of Eratosthenes (see A083140, A083221) n occurs. Here 𝒮𝒢(15) = 4 ≠ 3 = a(15) is the first differing entry. In Figure 14.2, we plot the first 1000 nim-values.

Figure 14.2: The initial 1000 nim-values of nontotative.

Let us sketch a few nim-value subsequences. The primes have nim-value 1, and the
prime squares have nim-value 2. The numbers with close prime factors, “almost
squares”, appear to have almost constant nim-values. On the other hand, some num-
bers in arithmetic progressions appear to have nim-values in almost arithmetic pro-
gressions; for example, 𝒮𝒢 (2n) = n for all n. We give the exact statements in Theorem 6.
For subsequences s of the natural numbers, let the asymptotic relative nim-value be

r_s = lim_{n→∞} 𝒮𝒢(s(n)) / s(n),

if it exists. The subsequences of largest relative nim-values, apart from s_0 = 2, 4, . . . (with 𝒮𝒢(2n) = n), are s_1 = 3, 9, . . . and s_2 = 5, 25, 35, 55, 65, . . . , with corresponding first differences Δ_1 = (6, 6, . . .) and Δ_2 = (20, 10, 20, 10, . . .) and nim-values in {⌊(n + 1)/4⌋} and {⌊n/10⌋, ⌈n/10⌉}, respectively. Thus r_0 = 1/2, r_1 = 1/4, and r_2 = 1/10, where r_{s_i} = r_i. The region between “prime factorization” and “purely arithmetic behavior” is still mysterious. We can identify at least one more sequence of arithmetic behavior, with r_3 ≈ 1/17, but the descriptions start to get quite technical here.

Theorem 6. Consider nontotative. For n ∈ ℕ,
(i) 𝒮𝒢(n) = 1 if and only if n is prime;
(ii) 𝒮𝒢(n) = 2 if and only if n is a prime square;
(iii) 𝒮𝒢(n) ∈ {3, 4} if and only if n = p_i p_{i+1} or n = 8: 𝒮𝒢(n) = 3 if and only if i is odd, with p_1 = 2;
(iv) 𝒮𝒢(n) ∈ {5, 6} if and only if n = p_i p_{i+2} or n = 12: 𝒮𝒢(n) = 5 if and only if i ≡ 1, 2 (mod 4).
Moreover, for n ∈ ℕ,
(v) 𝒮𝒢(2n) = n;
(vi) if n ≡ 3 (mod 6), then 𝒮𝒢(n) = ⌊(n + 1)/4⌋;
(vii) if n ≡ 5, 25 (mod 30), then 𝒮𝒢(n) ∈ {⌊n/10⌋, ⌈n/10⌉}.
Lastly, for n ∈ ℕ,
(viii) 𝒮𝒢(n) ≤ n/2;
(ix) if n is odd, then 𝒮𝒢(n) ≤ (n + 1)/4.

Proof. Note that 𝒮𝒢 (0) = 0 implies 𝒮𝒢 (n) > 0 if n > 0. The induction hypothesis
assumes all the items. Item (v) takes care of the case of even n, so in all other items,
we may assume that the smallest prime dividing n is greater than 2. Note that (ix) and
(v) imply (viii).
For (i), the only number smaller than a prime that is not relatively prime to it is 0. Hence 𝒮𝒢(p) = 1 if p is prime. If n = pm is not a prime (with p prime and m > 1), then there is a move to the prime divisor p, which is not relatively prime to n, and there is a move to 0. Hence the nim-value is greater than one.
For (ii), we consider prime squares p^2 and note that each option is of the form ap, 0 ≤ a ≤ p − 1. In particular, there are moves p^2 ↦ 0 and p^2 ↦ p, as noted in the first paragraph. Moreover, by induction we assume that 𝒮𝒢(m) = 2 if and only if 1 < m < p^2 is a prime square. Then m ≠ ap, and so by the minimal exclusive algorithm, 𝒮𝒢(p^2) = 2. For the other direction, we are done with the cases 0, 1, and primes. Consider the composite n = pm, not a prime square, where p is the smallest prime factor. Then there is a move to p^2, and hence 𝒮𝒢(n) ≠ 2.
For (iii), we begin by proving that 𝒮𝒢(n) ∈ {3, 4} if n = p_i p_{i+1} or n = 8, where the nim-value is three if and only if the index of the smaller prime is odd. The base case is 𝒮𝒢(2 ⋅ 3) = 3, and the exception n = 8 is by inspection. For the generic case, each option is of the form ap_i, 0 ≤ a ≤ p_{i+1} − 1, or ap_{i+1}, 0 ≤ a ≤ p_i − 1. In particular, there is a move to p_i (and to p_{i+1}) of nim-value one, and there is a move to p_i^2 of nim-value 2. We must show that there is no move to another prime pair of the same form, i. e., p_j p_{j+1} with j of the same parity as i. Observe that there is a move to p_{i−1} p_i, with j + 1 = i, but there is no move to any other almost square p_j p_{j+1}. By induction this observation suffices to find a move to nim-value 3 if i is even. For the other direction, we must show that 𝒮𝒢(n) ∉ {3, 4} if n is not an almost square. We are done with the cases where n is a prime or a prime square. Suppose that n = px is not of the mentioned form, where p is the smallest prime factor of n. The case p = 2 is dealt with in item (v), so let us assume that p > 2. Then there is a move to pq (not relatively prime with n), where q is the smallest prime larger than p, because by assumption x > q, and there is a move to pq′, where q′ is the largest prime smaller than p.
For (iv), we study the case n = p_i p_{i+2}. If i = 1, then n = 10, and 𝒮𝒢(10) = 5. If i = 2, then n = 21, and, by inspection, 𝒮𝒢(21) = 5. For the general case, among the options, we find 0, p_i, p_i^2, p_i p_{i+1}, p_{i+1} p_{i+2}. Hence by the previous paragraphs the options attain all nim-values smaller than 5. Next, suppose that i ≡ 1, 2 (mod 4); we must show that there is no option of the same form, which would create nim-value 5. Each option is a multiple of one of the primes p_i and p_{i+2}. The only possibility would be the option p_{i−2} p_i. However, i − 2 ≡ 0, 3 (mod 4). Hence no option has nim-value 5. The analogous argument shows that no option has nim-value 6 if i ≡ 0, 3 (mod 4), i ≥ 3; the particular case n = 12 = 2 ⋅ 2 ⋅ 3 is not an option since i ≥ 3. On the other hand, the argument shows that there is an option to nim-value 5. Consider the other direction. Suppose that n = p_i x, where p_i > 2 is the smallest prime in the factorization of n. (The case p = 2 is dealt with below.) If x > p_{i+2}, then there is a move to p_i p_{i+2}, and if i ≥ 3, then there is a move to p_{i−2} p_i. If i = 2, then there is a move to 12, of nim-value 6. That concludes this case. If p_i < x < p_{i+2}, then x = p_{i+1}, since p_i is the smallest prime in the decomposition of n, and we are done with this case.
For (v), we verify that 𝒮𝒢(2n) = n for all n. The options are of the form 2j with 0 ≤ j < n, and so induction on (v) gives that each nim-value smaller than n can be reached. Moreover, induction on (viii) gives that nim-value n does not appear among the options.
For (vi), we must prove that if n ≡ 3 (mod 6), then 𝒮𝒢(n) = ⌊(n + 1)/4⌋. The claimed nim-value sequence for the positions 3, 9, 15, 21, 27, . . . is ⌊(3 + 1)/4⌋, ⌊(9 + 1)/4⌋, ⌊(15 + 1)/4⌋, . . . , which is 1, 2, 4, 5, 7, 8, . . . . Clearly, n has each smaller position of the same form as an option. Precisely the multiples of 3 are missing in this nim-value sequence. However, by induction on (v) the multiples of 6 have nim-values that are multiples of 3, and indeed, all multiples of 6 smaller than n are options of n. By induction on item (ix), since n is odd, the nim-value ⌊(n + 1)/4⌋ does not appear among its options. The proof of (vii) is similar to that of (vi) but more technical, so we omit it.
Item (viii) follows directly by induction (for example, if n is even, then n − 2 is the largest option, and 𝒮𝒢(n − 2) ≤ n/2 − 1).
For item (ix), assume that p > 2 is the smallest prime divisor of n. The cases with p ≤ 5 have already been proved in items (vi) and (vii). Hence p > 5. It follows that, with t = ⌊n/(2p)⌋, we have tp + 3 < n/2 < (t + 1)p − 3. Hence the nim-value ⌊(n + 1)/4⌋ cannot be reached from n by options of the form in (v). On the other hand, it cannot be reached by moving to an odd number, since by induction the options n − 2p and smaller produce too small nim-values.

We do not yet know if all nim-values can be obtained by analogous reasoning.
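Several items of Theorem 6 are quick to test empirically; a small sketch (ours) recomputes the nim-values of nontotative and checks items (v), (vi), and (viii) up to 1000:

    from math import gcd

    def mex(s):
        m = 0
        while m in s:
            m += 1
        return m

    N = 1000
    g = [0] * (N + 1)
    for n in range(1, N + 1):
        opts = {g[0]} | {g[m] for m in range(2, n) if gcd(m, n) > 1}
        g[n] = mex(opts)

    assert all(g[2 * n] == n for n in range(1, N // 2 + 1))       # item (v)
    assert all(g[n] == (n + 1) // 4 for n in range(3, N + 1, 6))  # item (vi)
    assert all(2 * g[n] <= n for n in range(N + 1))               # item (viii)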

3 Counting games
This section concerns rulesets as in item (ii) in the introduction. Binary games have
only one option per heap, so the decision problem reduces to which heap to move in.
The nim-value of a binary game is binary, that is, each nim-value ∈ {0, 1}; the nim-value
of a given disjunctive sum of binary games is 0 if and only if the number of heaps of
nim-value 1 is even. Of course, the nim-value sequence for any given ruleset is valid in
the much larger context of all normal-play combinatorial games.

We begin in Section 3.1.1 by solving totient, and then we sketch a classification scheme for nontotient. Then we list nim-values of other open counting games.

3.1 Harold Shapiro and the totient games


Recall that Euler’s totient (or ϕ) function counts the number of relatively prime
residues of a given number. For example, ϕ(7) = 6 and ϕ(6) = 2. Recall that this
function is multiplicative. This is where we can apply a known result by Harold
Shapiro “An arithmetic function arising from the ϕ function” [7]. The fundamental
theorem for iteration of the Euler ϕ function in his work is as follows. For all i ≥ 1, let ϕ^i(n) = ϕ^{i−1}(ϕ(n)), with ϕ^0(n) = n. Since ϕ^i(n) < ϕ^{i−1}(n), and ϕ(n) is even if n > 2, we have that for all n > 2 and some unique i > 0 (depending on n only),

ϕ^i(n) = 2. (14.1)

This lets us define the class of n as C(n) = i when (14.1) holds, and otherwise C(1) = C(2) = 0.

Theorem 7 ([7]). Let m, n ∈ ℕ. If n is odd, then C(n) = C(2n), and otherwise C(n) + 1 =
C(2n). In general, if either m or n is odd, then C(mn) = C(m) + C(n). Otherwise, that is, if
both m and n are even, then C(mn) = C(m) + C(n) + 1.

For example, ϕ^2(7) = ϕ(6) = 2, so C(7) = 2. In general, for primes p, ϕ^2(p) = ϕ(p − 1), so C(p) = C(p − 1) + 1. Here is an example: C(15) = C(3) + C(5) = 1 + C(4) + 1 = 2 + C(2) + C(2) + 1 = 3; note that ϕ(15) = ϕ(3)ϕ(5) = 2 ⋅ 4 = 8. Moreover, ϕ(8) = 4 and ϕ(4) = 2, so indeed ϕ^3(15) = 2.
This result lets us compute the nim-values of the first totient ruleset totient sim-
ply by recalling the ϕ-values for the primes.

3.1.1 totient, the move-to variation of the totient game

This is a binary game. Indeed, there is only one option from n, namely ϕ(n), the num-
ber of relatively prime residues of n.

n opt(n) 𝒮𝒢(n)

1 ⌀ 0
2 1 1
3 2 0
4 2 0
5 4 1
6 2 0
7 6 1
8 4 1

Here are a few more nim-values in the nim-sequence:

01001, 01111, 01010, 01100, 00101, 0001,

where the commas are for readability. Each game component has a forced move, and if played alone it may be regarded as an automaton. Starting from 8, for example, the iteration of ϕ gives the sequence of moves 8 ↦ 4 ↦ 2 ↦ 1, and the nim-sequence, of course, alternates between 0s and 1s, terminating with the 0 at position 1. If played on a disjunctive sum of totient components, the nim-value is of course also binary, and it is 0 if and only if there is an even number of heaps of nim-value 1. Suppose that we play 7_t + 7_s, where the first 7 is totient and the second 7 is subtraction{1, 2}, with nim-value sequence 0, 1, 2, 0, 1, 2, 0, . . . , say with terminal (sink) position 1 in both components. Then there are exactly two winning moves, to 6_t + 7_s or 7_t + 5_s. Intelligent play from a general position m_t + n_s requires full understanding of totient.

Theorem 8. Consider totient, and let C be as in Theorem 7. Then 𝒮𝒢 (1) = 0, and for
n > 1, 𝒮𝒢 (n) = C(n) + 1 (mod 2).

Proof. Use Theorem 7.

In general, it suffices to compute the parity of C(n) and, given the factorization of n, apply Theorem 7. For example, C(2^3) = C(2) + C(2^2) + 1 = 3C(2) + 2 = 2, and without looking into the table, we get 𝒮𝒢(8) = 1. For another example, if n = 2 ⋅ 3^7 ⋅ 11, then C(n) = 7C(3) + C(11) = 7 + 3 = 10, since ϕ(3) = 2 and ϕ^3(11) = 2. Therefore 𝒮𝒢(48114) = (C(48114) + 1) (mod 2) = 11 (mod 2) = 1, and we find a unique winning move 48114_t + 3_s ↦ 48114_t + 2_s, where s is still subtraction{1, 2}.
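The parity computation is easy to mechanize; here is a small sketch (ours) that evaluates C by direct iteration of ϕ and reconfirms the worked example:

    def phi(n):  # Euler's totient, by trial-division factorization
        result, p, m = 1, 2, n
        while p * p <= m:
            if m % p == 0:
                e = 0
                while m % p == 0:
                    m //= p
                    e += 1
                result *= (p - 1) * p ** (e - 1)
            p += 1
        if m > 1:
            result *= m - 1
        return result

    def C(n):  # Shapiro's class: least i with phi^i(n) = 2; C(1) = C(2) = 0
        i = 0
        while n > 2:
            n = phi(n)
            i += 1
        return i

    def sg_totient(n):  # Theorem 8
        return 0 if n == 1 else (C(n) + 1) % 2

    assert C(48114) == 10 and sg_totient(48114) == 1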

3.1.2 nontotient, the subtraction variation of the totient game

From a given number n, subtract the number of relatively prime numbers smaller
than n. We cannot adapt Theorem 8 because it relies on iterations where a player in-
stead moves-to this number, and the authors have not yet found a similarly efficient
tool. Let us list the initial options and nim-values. An alternative way to think of the
options is to move-to the number of nonrelative primes, including the number. The
nim-values alternate for heaps that are powers of primes, starting with 𝒮𝒢(p^0) = 0. This happens because the number of nonrelatively prime numbers smaller than or equal to a prime power p^k is p^{k−1}. Hence 𝒮𝒢(p^k) = 0 if and only if k is even. Since ϕ is multiplicative, it is easy to compute f(n) = n − ϕ(n) for any n or to get a formula for f for any given prime decomposition of n. However, f is not multiplicative, which limits the applicability of such formulas. In some particular cases, we can use the proximity to powers of primes for fast computation of the nim-value. Take the case of n = p^k q for some distinct primes p and q. Then f(n) = n − ϕ(n) = p^{k−1}(q + p − 1). Whenever q + p − 1 is a power of the prime p, say p^j, the single option is the prime power p^{k−1+j}, and the nim-value of n is immediate from the parity of the new exponent. Take, for example, p = 2 and q = 7. Then p + q − 1 = 2^3, and so we can find the nim-value of, for example, n = 7168 = 7 ⋅ 2^10: the option is the even power 2^{9+3} = 2^12, of nim-value 0, and so 𝒮𝒢(7168) = 1. Similarly, with p = 3 and q = 7, where p + q − 1 = 3^2, we can easily compute 𝒮𝒢(413343) = 0, because 413343 = 7 ⋅ 3^10 has the option 3^11, of nim-value 1.
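Since the single option of a heap n > 1 is n − ϕ(n), the binary nim-value can be computed by a one-line recursion; the following sketch (ours) confirms the two computations above:

    from functools import lru_cache

    def phi(n):  # Euler's totient, by trial-division factorization
        result, p, m = 1, 2, n
        while p * p <= m:
            if m % p == 0:
                e = 0
                while m % p == 0:
                    m //= p
                    e += 1
                result *= (p - 1) * p ** (e - 1)
            p += 1
        if m > 1:
            result *= m - 1
        return result

    @lru_cache(maxsize=None)
    def sg_nontotient(n):
        if n == 1:
            return 0
        return 1 - sg_nontotient(n - phi(n))  # single option, binary game

    assert sg_nontotient(7168) == 1 and sg_nontotient(413343) == 0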
Let “dist” denote the number of iterations of f to an even power of a prime. We get
the following suggestive table of the first few nim-values. We leave a further classifi-
cation of dist as an open problem.

n opt(n) dist 𝒮𝒢(n)

1 ⌀ 0 0
2 1 1 1
3 1 1 1
4 2 0 0
5 1 1 1
6 4 1 1
7 1 1 1
8 4 1 1
9 3 0 0
10 6 2 0
11 1 1 1
12 8 2 0
13 1 1 1
14 8 2 0
15 7 2 0
16 8 0 0

3.2 The τ-games


The nim-sequences of mtau and stau do not yet appear in OEIS. The number of divisors is multiplicative in the following sense: τ(n) = (a_1 + 1) ⋅ ⋅ ⋅ (a_k + 1), where n = p_1^{a_1} ⋅ ⋅ ⋅ p_k^{a_k} with primes p_i.

3.2.1 Move-to the number of proper divisors

Consider mtau, where the single option is the number of proper divisors.4 A heap of
size one has no option, so the nim-value sequence starts at 𝒮𝒢 (1) = 0. Let us list the
first few nim-values.

4 Note that if we remove the word proper here, then both 1 and 2 become loopy, and thus all games
would be drawn. See also Section 8 for some more reflections on “loopy” or “cyclic” games.

n opt(n) 𝒮𝒢(n)

1 ⌀ 0
2 1 1
3 1 1
4 2 0
5 1 1
6 3 0
7 1 1
8 3 0
9 2 0

Note that each prime has nim-value 1 because they have only one proper divisor. From
this small table we may deduce many more nim-values. The first few 0-positions are

1, 4, 6, 8, 9, 10, 12, 14, 15, 18, 20, 21, 22, 24, 25, 26, 27, 28, 30, . . . .

Note that 16 is the first composite number that is not included, 36 is the second one,
and then 48, 80, 81, 100, etc. What is special about these composite numbers?
The sequence of all ones has some resemblance to the sequence of all numbers with a nonprime number of proper divisors. As mentioned, 16 and 36 are the first composite members of this sequence. These two numbers are the smallest composite numbers with a composite (nonprime) number of proper divisors; such numbers generalize the primes, because primes also have a nonprime number of proper divisors. We are interested in the smallest number n = p_1^{a_1} ⋅ ⋅ ⋅ p_k^{a_k} for which τ(n) − 1 = (a_1 + 1) ⋅ ⋅ ⋅ (a_k + 1) − 1 ∈ {16, 36, 48, 80, . . .}, that is, the smallest number n such that (a_1 + 1) ⋅ ⋅ ⋅ (a_k + 1) ∈ {17, 37, 49, 81, . . .}. An obvious candidate is n = 2^16 with a_1 = 16 and otherwise a_i = 0. However, it turns out that n = 2^6 ⋅ 3^6 = 46656 < 2^16 gives τ(46656) = (6 + 1)(6 + 1) = 49, and this is indeed the smallest such number. Thus we have the following observation.

Observation 1. Consider mtau. If n < 46656, then 𝒮𝒢(n) = 1 if and only if n has a nonprime number of proper divisors.

We note that neither sequence is listed in OEIS (see Section 3.3.1 for a listed similar
sequence).
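The boundary 46656 in Observation 1 can be verified directly; a sieve-based sketch (ours) computes τ by counting divisors and the binary nim-values by the single-option recursion:

    N = 46656
    tau = [0] * (N + 1)
    for d in range(1, N + 1):         # divisor-count sieve
        for m in range(d, N + 1, d):
            tau[m] += 1

    g = [0] * (N + 1)                 # g[1] = 0; the heap of size one is terminal
    for n in range(2, N + 1):
        g[n] = 1 - g[tau[n] - 1]      # single option: the number of proper divisors

    def is_prime(k):
        return k > 1 and all(k % p for p in range(2, int(k ** 0.5) + 1))

    # the equivalence holds strictly below 46656 and fails exactly there
    assert all((g[n] == 1) == (not is_prime(tau[n] - 1)) for n in range(2, N))
    assert g[N] == 0 and not is_prime(tau[N] - 1)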

3.2.2 Subtract the number of divisors

Consider stau, where the single option is the number minus its number of divi-
sors. This variation has a mysterious nim-sequence beginning with 𝒮𝒢 (0) = 0.
A heap of size one has one divisor with an option to zero. A heap of size two has
two divisors, and hence the option is zero, and so on: 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0,
1, 1, 0, 0, 1, 1, 1, . . . . The 1s occur at

1, 2, 5, 8, 9, 10, 12, 13, 15, 16, 19, 20, . . . .

These sequences do not yet occur in OEIS.

3.3 The Ω and ω-games


The sequence of number of prime factors, counted with multiplicity, is called Ω(n).
Otherwise, when only the distinct primes are counted, it is called ω(n).5
Somewhat surprisingly, nim-value sequences for the games that count the number
of prime divisors do not yet appear in OEIS.

3.3.1 Move-to the number of prime divisors

The nim-value sequence of mΩ starts at a heap of size one, of nim-value 0, by definition. Any prime has a move-to one, so all primes have nim-value one; a prime square has a move-to the heap of size 2 and hence has nim-value 0, and so on. The nim-value sequence starts:

0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, . . . .

The indices of the ones form a generalization of the primes:

2, 3, 5, 7, 11, 13, 16, 17, 19, 23, 24, 29, 31, 36, 37, 40, 41, . . . .

The number 64 is in the sequence, and this distinguishes it from A026478. Still it is not exactly A167175, since not all numbers with a nonprime number of prime divisors are included. The sequences coincide until 2^16 − 1 though, since the first such number to be excluded is 2^16. Via a similar (but easier) reasoning as in Section 3.2.1, we have the following observation.
the following observation.

Observation 2. Consider mΩ. If n < 2^16, then 𝒮𝒢(n) = 1 if and only if n has a nonprime number of prime divisors counted with multiplicity.

3.3.2 Subtract the number of prime divisors

Here we consider the ruleset sΩ “subtract the number of prime divisors”. A heap of size
one has nim-value zero by definition. A heap of size two has a move to a heap of size
one and has nim-value one. A heap of size three has a move to a heap of size two and

5 That is, if n has canonical form n = p_1^{a_1} p_2^{a_2} ⋅ ⋅ ⋅ p_k^{a_k}, then Ω(n) = a_1 + a_2 + ⋅ ⋅ ⋅ + a_k and ω(n) = k.
has nim-value zero. The nim-value sequence starts: 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, . . . , and the indices of the ones are located at 2, 5, 6, 9, 10, 13, 14, 16, 18, 20, 21,
23, . . . .
These sequences do not appear in OEIS.

3.3.3 Move-to the number of distinct prime divisors

The nim-value sequence of mω “move-to the number of distinct prime divisors” starts at one, of nim-value zero. The first few nim-values are: 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, . . . , and the corresponding indices of the ones are 2, 3, 4, 5, 7, 8, 9, 11, 13, . . . .
The first nim-value that distinguishes it from mΩ is that of the heap of size 4. Since 4 has only one distinct prime factor, it behaves like a prime here, and the nim-value is one. Six is the smallest number with more than one distinct prime factor, and it has nim-value zero; hence the smallest number with more than one distinct prime factor for which the nim-value is one must have six distinct prime factors.

Observation 3. If n < 7!, then 𝒮𝒢(n) = 1 if and only if n contains exactly one distinct prime factor.

3.3.4 Subtract the number of distinct prime divisors

The nim-value sequence of sω “subtract the number of distinct prime divisors” starts at one, which does not have any prime divisor and hence has nim-value zero. Next, two has the option one, three has the option two, and four has the option three. The first few nim-values are:
few nim-values are:

0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, . . . ,

with ones at indices 2, 4, 7, 9, 10, 13, 14, 16, 19, 20, 23, . . . . Neither of these sequences ap-
pears in OEIS.

4 Dividing games
The ruleset dividing deploys the notion of a disjunctive sum in its recursive definition; that is, an option is typically, with some exceptions, a disjunctive sum of games. A reference that goes into detail on such games is [3].

4.1 The dividing game


For this game, the position is a natural number. The codomain Y in the definition of opt is 2^{2^X}, that is, an option is a disjunctive sum of natural numbers. A player divides
the current number into equal parts, and, as usual, we write “+” to separate parts into
new game components. To avoid long chains of components, we use multiplicative
notation in the sense that x × y means y copies of x (i. e., x + ⋅ ⋅ ⋅ + x). In this notation,
addition is commutative, but multiplication is not. For example, opt(10) = {5 × 2, 2 ×
5, 1 × 10}. The current player moves in precisely one of the components and leaves the
other ones unchanged. For example, a move from 5 + 5 is to 5 + 1 × 5 = 5 (because no
move is possible from 1 × 5), and by symmetry this is the only admissible move. The
number of options is τ(n)−1, and here is the table of the first few options together with
its nim-values:

n opt(n) 𝒮𝒢(n)

1 ⌀ 0
2 1+1 1
3 1×3 1
4 2 × 2, 1 × 4 1
5 1×5 1
6 3 × 2, 2 × 3, 1 × 6 2
7 1×7 1
8 4 × 2, 2 × 4, 1 × 8 1

The options simplify by the mimicking strategy and by a heap of size one being terminal:

n opt(n) 𝒮𝒢(n)

1 ⌀ 0
2 1 1
3 1 1
4 1 1
5 1 1
6 1, 2 2
7 1 1
8 1 1

Let Ω2 (n) denote the number of prime factors of n, where the powers of 2 are counted
without multiplicity, and the powers of odd primes are counted with multiplicity.

Theorem 9. Consider dividing. For all n ∈ ℕ, 𝒮𝒢 (n) = Ω2 (n).

Proof. 𝒮𝒢(1) = 0, and 1 has no prime factors. Suppose that n is a power of two. Then 𝒮𝒢(n) = 1, since each option allows the mimicking strategy. Similarly, if n is a prime, then 𝒮𝒢(n) = 1. Suppose that n = 2^k p_1 ⋅ ⋅ ⋅ p_j with k ∈ ℕ0 and each p_i an odd prime. We use induction to prove that 𝒮𝒢(n) = j if k = 0 and 𝒮𝒢(n) = j + 1 otherwise. If k = 0, then n can be split into an odd number of components each having m prime factors, for each m ∈ [1, j − 1]. Induction and the nim-sum together with the mex-rule give the result in this case. Similarly, if k > 0, then n can be split into an odd number of components of m prime factors for each m ∈ [1, j], which proves that 𝒮𝒢(n) = j + 1 in this case.

Example 10. Suppose the position is 18 + 7 = 2 ⋅ 3^2 + 7. How do we play to win? The nim-value is 3 ⊕ 1 = 2, where ⊕ denotes the nim-sum. Hence the next player has a good move. The good move turns the 18-component to nim-value 1; that is, we divide it into an odd number of even numbers with no odd factor. This can be done in only one way: a player moves to 2 × 9 + 7 = 2 + 7, and clearly 𝒮𝒢(2 + 7) = 0. The next player has exactly two options, but, either way, a player will finish the game in the next move.
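Theorem 9 is also easy to confirm by machine; in the sketch below (ours), the option d × (n/d) contributes the value 𝒮𝒢(d) when the number of copies n/d is odd and 0 when it is even, since equal pairs of heaps cancel in the nim-sum:

    def mex(s):
        m = 0
        while m in s:
            m += 1
        return m

    def omega2(n):  # odd primes with multiplicity, the prime 2 without
        count = 0
        if n % 2 == 0:
            count += 1
            while n % 2 == 0:
                n //= 2
        p = 3
        while p * p <= n:
            while n % p == 0:
                n //= p
                count += 1
            p += 2
        return count + (1 if n > 1 else 0)

    N = 500
    g = [0] * (N + 1)
    for n in range(2, N + 1):
        g[n] = mex({g[d] if (n // d) % 2 else 0
                    for d in range(1, n) if n % d == 0})
    assert all(g[n] == omega2(n) for n in range(1, N + 1))  # Theorem 9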

4.2 The dividing and remainder game


The ruleset divide-and-residue is an extension of dividing, where a player is allowed
to divide n into k equal parts d and a remainder r that is smaller than the parts. Thus
here we have a lot more options (for a generic game) than dividing, which is obvious
by the representation n = k × d + r with 0 ≤ r < d. By moving we are free to choose any
1 ≤ d < n, so we have n − 1 options for all n > 0. The nim-sequence starts as follows:

n opt(n) 𝒮𝒢(n)

1 ⌀ 0
2 1+1 1
3 2 + 1, 1 × 3 2
4 3 + 1, 2 × 2, 1 × 4 1
5 4 + 1, 3 + 2, 2 × 2 + 1, 1 × 5 2
6 5 + 1, 4 + 2, 3 × 2, 2 × 3, 1 × 6 3
7 6 + 1, 5 + 2, 4 + 3, 3 × 2 + 1, 2 × 3 + 1, 1 × 7 2
8 7 + 1, 6 + 2, 5 + 3, 4 × 2, 3 × 2 + 2, 2 × 4, 1 × 8 3

An even number of heaps of the same sizes reduces to a heap of size one. A heap of
size one in a disjunctive sum gets removed. We get an equivalent reduced table:

n opt(n) 𝒮𝒢(n)

1 ⌀ 0
2 1 1
3 2, 1 2
4 3, 1 1
5 4, 3 + 2, 1 2
6 5, 4 + 2, 2, 1 3
7 6, 5 + 2, 4 + 3, 2, 1 2
8 7, 6 + 2, 5 + 3, 2, 1 3
9 8, 7 + 2, 6 + 3, 5 + 4, 3, 1 4
10 9, 8 + 2, 7 + 3, 6 + 4, 3, 2, 1 3

Note that when we remove pairs of equal numbers, sometimes we must add the option “1” to symbolize a move to a position of nim-value 0, for example 𝒮𝒢(2 + 2) = 2 ⊕ 2 = 0. From this table we may deduce that the game 7 + 3 from the first paragraph of the paper is indeed a losing position. divide-and-residue has a mysterious nim-sequence, as depicted in Figure 14.3.

Figure 14.3: The initial 20000 nim-values of divide-and-residue. They just about touch the nim-value 2^8 = 256.

Here are the 50 first nim-values, of the form [heap size, nim-value]:

[1, 0], [2, 1], [3, 2], [4, 1], [5, 2], [6, 3], [7, 2], [8, 3], [9, 4], [10, 3], [11, 4], [12, 3], [13, 4], [14, 3],
[15, 4], [16, 3], [17, 4], [18, 5], [19, 4], [20, 5], [21, 3], [22, 5], [23, 4], [24, 2], [25, 1], [26, 5],
[27, 6], [28, 5], [29, 6], [30, 2], [31, 6], [32, 5], [33, 3], [34, 8], [35, 9], [36, 8], [37, 9], [38, 8],
[39, 9], [40, 8], [41, 9], [42, 4], [43, 9], [44, 4], [45, 9], [46, 8], [47, 9], [48, 4], [49, 9], [50, 4].
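These values can be reproduced with a few lines (our sketch): the option k × d + r has value 𝒮𝒢(d) ⊕ 𝒮𝒢(r) when k is odd and 𝒮𝒢(r) when k is even, again because equal pairs of heaps cancel:

    def mex(s):
        m = 0
        while m in s:
            m += 1
        return m

    N = 50
    g = [0] * (N + 1)  # g[0] = 0: a remainder r = 0 contributes nothing
    for n in range(2, N + 1):
        opts = set()
        for d in range(1, n):
            k, r = divmod(n, d)
            opts.add((g[d] if k % 2 else 0) ^ g[r])
        g[n] = mex(opts)

    print([(n, g[n]) for n in range(1, N + 1)])
    # [(1, 0), (2, 1), (3, 2), (4, 1), (5, 2), (6, 3), (7, 2), (8, 3), ...]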

Early nim-values tend to be odd for heaps of even size and even for those of odd size. By an elementary argument, the heap of size 25 is the largest heap of nim-value one, and we can prove the analogous statement for a few more small nim-values. We conjecture that any fixed nim-value occurs only finitely many times.
For the upper bound, the nim-values seem to be bounded by n^{3/5} for sufficiently large heap sizes n. An empirical observation is that the growth of nim-values appears to be halted at powers of two. For example, the nim-value 2^2 starts to appear at heap size 9 but does not increase beyond 2^2 + 2^0 until the heap of size 27.

Conjecture 1. Consider divide-and-residue. Then each nim-value occurs, and only a finite number of times. Moreover, 𝒮𝒢(n)/n → 0 as n → ∞.

It is no big surprise that this game is hard, since it is an extension of grundy’s game [1, 8]. Indeed, the options of divide-and-residue in which the divisor d is greater than n/2 correspond to the rule of splitting a heap into two unequal parts in grundy’s game. If we define the ruleset complement-grundy by requiring that k ≥ 2 in divide-and-residue, then we can prove the second statement in Conjecture 1 for this new game. Let us tabulate the first few nim-values, where options are displayed in reduced form:

n opt(n) 𝒮𝒢(n)

1 ⌀ 0
2 1 1
3 1 1
4 1 1
5 1 1
6 2, 1 2
7 2, 1 2
8 2, 1 2
9 3, 2, 1 2
10 3, 2, 1 2
11 3, 3 + 2, 2, 1 2

Figure 14.4 shows that the initial regularity of nim-values is replaced by more com-
plexity further down the road, although not as severely as for divide-and-residue.
Note that the two games appear to share some geometric properties such as a local
stop of nim-value growth at powers of two and a bounded number of occurrences for
each nim-value. In this case though, some nim-values do not appear, such as 12, 15, 20,
etc. We do not yet know if the omitted nim-values can be described by some succinct
formula, and we do not even know if the occurrence of each nim-value is finite.

Theorem 10. Consider complement-grundy. Then 𝒮𝒢 (n)/n → 0 as n → ∞.

Proof. Consider the nim-value 2^k. If it does not appear, then we are done. Suppose that it appears for the first time at heap size n_k. By the mex rule, if a position has nim-value greater than 2^k, then it must have nim-value 2^k in its set of options. By the rules of the game this can only happen for a heap of size m ≥ 3 ⋅ n_k. In particular, this holds for the nim-value 2^{k+1}, which occurs for the first time at m = n_{k+1}, say. Thus, for nim-values that are powers of two, we get (2/3) 𝒮𝒢(n_k)/n_k ≥ 𝒮𝒢(n_{k+1})/n_{k+1}. This upper bound holds for arbitrary nim-values, since the lower bound on where the power of two 2^{k+1} can appear is the same lower bound on where any other nim-value greater than 2^k may appear.

There are two simpler variations of divide-and-residue.


1) The remainder is not included in the disjunctive sum of an option: divide-throw-
residue.
2) Only the remainders are the options: residue-throw-divisor.

Figure 14.4: The initial 20000 nim-values of complement-grundy.

The patterns of the nim-values of these rulesets are displayed in Figure 14.5. For varia-
tion 1 (to the left), we prove that for heaps greater than 1, the nim-sequence coincides
with OEIS A003602: if n = 2^m(2k − 1) for some m ≥ 0, then a(n) = k. The Sprague–
Grundy sequence starts at a heap of size one with nim-values as follows:

0, 1, 2, 1, 3, 2, 4, 1, 5, 3, 6, 2, 7, 4, 8, 1, 9, 5, 10, 3, 11, 6, 12, . . . .

Figure 14.5: The initial nim-values of divide-throw-residue and residue-throw-divisor, respectively.

Namely, divide-throw-residue has the same solution as maliquant, the game where
the options are the nondivisor singletons; recall Theorem 3, where this result is ex-
pressed as an index function io , the index of the largest odd divisor.

Theorem 11. Consider divide-throw-residue. Then 𝒮𝒢(n) = io(n) = k if n = 2^m(2k − 1) for some integer m ≥ 0.

Proof. Observe that the options in the interval [⌊n/2⌋ + 1, n − 1] are the same as for maliquant. Assume first that n is even. Then n/2 + n/2 is an option in divide-throw-residue, but n/2 is not an option in maliquant. However, n/2 + n/2 only contributes the nim-value 0 and may be ignored. Consider next a disjunctive sum m + ⋅ ⋅ ⋅ + m with an odd number of components adding up to n. Then there is a power of 2, say 2^k, such that 2^k m ∈ [⌊n/2⌋ + 1, n − 1], i. e., 2^k m ∤ n. So, by induction, 𝒮𝒢(m) = 𝒮𝒢_M(2^k m), where the M indicates maliquant. On the other hand, there are options of maliquant of the form m ∤ n, m < n/2. They do not have a match in divide-throw-residue. However, as we saw in the proof of Theorem 3, they do not contribute to the nim-value computation in maliquant. The case of odd n is similar.

For variation 2, we observe the following nim-sequence: 0, 1, 1, 1, 2, 2, 2, 2, 2, 2, . . . ; i. e., for n > 0, once 3 ⋅ 2^k copies of k + 1 have appeared, append 3 ⋅ 2^{k+1} copies of k + 2 as the next nim-values.

Theorem 12. Consider residue-throw-divisor. For all n ∈ ℕ, 𝒮𝒢(n) = k if

n ∈ {3(2^{k−1} − 1) + 2, . . . , 3(2^k − 1) + 1}.

Proof. We leave this proof to the reader.
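For the skeptical reader, a numerical check of Theorem 12 (our sketch; the options of a heap n are exactly the remainders n mod d for 1 ≤ d < n):

    def mex(s):
        m = 0
        while m in s:
            m += 1
        return m

    N = 200
    g = [0] * (N + 1)  # the heaps 0 and 1 are terminal
    for n in range(2, N + 1):
        g[n] = mex({g[n % d] for d in range(1, n)})

    for n in range(2, N + 1):  # SG(n) = k on {3(2^(k-1) - 1) + 2, ..., 3(2^k - 1) + 1}
        k = g[n]
        assert 3 * (2 ** (k - 1) - 1) + 2 <= n <= 3 * (2 ** k - 1) + 1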

5 The factoring games


Is there any game that has the number of prime factors of n ∈ ℕ as the sequence of nim-
values? The answer is yes, as given by the almost trivial aliquot game in Section 2.1.1.
There is a related game that decomposes into several components in play, namely, to
play to any factorization of n.

Example 11 (m-factoring). Let n = 12. Then the set of options is {6 + 2, 3 + 4, 2 + 2 + 3}. The unique winning move is to 2 + 2 + 3, because the nim-values of the options are 1, 1, 0, respectively. Hence 𝒮𝒢(12) = 2.

Example 12 (s-factoring). Let n = 12. Then the set of options is {6+10, 9+8, 10+10+9}.
The nim-sequence starts:

0, 0, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1,

where the first value is for the empty heap, and the second 0 is due to the fact that 1 does not have any prime factors. 𝒮𝒢(12) = 2.

m-factoring has a simple solution, but we do not yet understand s-factoring.


Recall the omega-functions from Section 3.3.

Theorem 13. Consider m-factoring, and let n ⩾ 2, where each option is a nontrivial
disjunctive sum of a factoring of n. Then 𝒮𝒢 (n) = Ω(n) − 1. If no two distinct components
may contain the same prime number, then 𝒮𝒢 (n) = ω(n) − 1.

Proof. If n is a prime, then 𝒮𝒢(n) = 0, because no factoring into smaller components is possible. If n is composite with k prime factors, then, by induction, it is possible to play to an option of nim-value ℓ for each ℓ ∈ {0, . . . , k − 2} by factoring n into one number with ℓ + 1 prime factors and k − ℓ − 1 prime components. On the other hand, the nim-value k − 1 cannot be obtained as a move option, since (x_1 − 1) ⊕ ⋅ ⋅ ⋅ ⊕ (x_ℓ − 1) ⩽ (x_1 − 1) + ⋅ ⋅ ⋅ + (x_ℓ − 1) ⩽ k − 2 if k = x_1 + ⋅ ⋅ ⋅ + x_ℓ. The proof of the second part is similar.
Note that dividing from Section 4.1 is in fact move-to dividing, m-dividing. Sim-
ilarly to s-factoring, we propose as an open problem the subtract dividing game,
s-dividing.

6 Full set games


In fullset maliquot a player moves to all the proper divisors in a disjunctive sum.6
Let us display the first few numbers with their options and nim-values:

n opt(n) 𝒮𝒢(n)

1 ⌀ 0
2 1 1
3 1 1
4 1+2 0
5 1 1
6 1+2+3 1
7 1 1
8 1+2+4 0
9 1+3 0

The nim-value sequence starts 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, . . . . The nonunit proper divisors of 24 are 2, 3, 4, 6, 8, and 12. The only square-free
ones are 2, 3, and 6, an odd number. Such observations are relevant for the proof of
the location of the 0s.

6 Obviously, we need to exclude the divisor n | n; the word “proper” is implicit in the naming.

Theorem 14. Consider fullset maliquot. Then 𝒮𝒢 (n) ∈ {0, 1}, and 𝒮𝒢 (n) = 1 if and
only if n > 1 is square-free.

Let us indicate the idea of the proof. The nim-value 𝒮𝒢 (4) = 0 because the only
nonunit proper divisor 2 is square-free, and 𝒮𝒢 (8) = 0, because there is exactly one
square-free proper divisor, namely 2. In the proof, we will use the idea that 𝒮𝒢 (n) = 0
if and only if n has an even number of square-free proper divisors.

Proof. We induct on the number of divisors.


If n = p is prime, then there is an even number, namely 0, of square-free nonunit
proper divisors. The nim-value 𝒮𝒢 (p) = 1 is correct, because the move to the heap of
size one is terminal.
Consider an arbitrary number n. Each move will alter the nim-value modulo 2. We must relate this to the number of nonunit square-free proper divisors among the components of the option of n. By induction, if we show that this number is odd if and only if 𝒮𝒢(n) = 0, then we are done. Henceforth, we will ignore any component that is a heap of size one, since it has nim-value 0 and will not contribute to the disjunctive sum.
Suppose first that n = p^2 is the square of a prime. Then the option is the prime p, and hence 𝒮𝒢(p^2) = 0. Indeed, there is an odd number of square-free nonunit divisors. If n = p^t, t > 2, is any other power of a prime, then we must prove that 𝒮𝒢(n) = 0. The set of nonunit proper divisors is {p, . . . , p^{t−1}}, and hence there is exactly one square-free divisor in the disjunctive sum p + ⋅ ⋅ ⋅ + p^{t−1}. By induction each component except p has nim-value 0. This proves the claim.
Next, suppose n = pq, where p and q are distinct primes. Then the option is p + q, of nim-value 1 ⊕ 1 = 0. Hence 𝒮𝒢(pq) = 1, and n has an even number of square-free nonunit proper divisors.
Similarly, if n = p_1 ⋅ ⋅ ⋅ p_j is a product of distinct primes, then 𝒮𝒢(n) = 1. This follows because the number of nonunit proper divisors,

\sum_{1 \le i < j} \binom{j}{i}, (14.2)

is even, where j is the number of prime factors of n (this holds both for even and odd j).
By induction each such individual component divisor has nim-value 1. Note that by
moving in one such divisor the number of components in the disjunctive sum changes
parity; if moved in a prime, then the prime is deleted, and if moved in pq, then this
component splits to p + q, and so on.
By combining these observations, we prove the general case of an arbitrary prime factorization. Assume that n contains a square. We must show that 𝒮𝒢(n) = 0. By induction we are concerned only with the square-free divisor components, and we show that the number of such divisors is odd.
Indeed, if we let j in (14.2) be the number of distinct prime factors, then there is one missing term, \binom{j}{j}: the divisor composed of all the square-free factors must be counted whenever n contains a square. Apart from this, no new square-free divisor is introduced. Thus the number of such components is odd, and since by induction they have nim-value 1, we have 𝒮𝒢(n) = 0.
We have investigated a few more of the fullset games, including those in the sub-
class “subtraction”, but not yet found other examples with sufficient regularity to
prove basic correspondence with number theory. Apart from fullset maliquot, this
class for now remains a mystery. For example, for fullset totient, the sequence
starts 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0. The heap of size one has nim-value zero
by definition, and the heap of size two has nim-value one, because one is relatively
prime with two. 𝒮𝒢 (3) = 0, because the option is 1 + 2 of nim-value 0 ⊕ 1 = 1. The
sequence of the indices of the ones is 2, 4, 5, 10, 11, 12, 13, 15, and so on. This sequence
does not yet appear in OEIS.

7 Powerset games
We study six versions of the powerset games on arithmetic functions, and we begin by
listing the first 20 nim-values for the respective ruleset. All start at a heap of size one,
except item 2, which starts at the empty heap (defined as terminal).
1. powerset maliquot: move-to an element in the powerset of the proper divisors.

0, 1, 1, 2, 1, 2, 1, 4, 2, 2, 1, 4, 1, 2, 2, 8, 1, 4, 1.

2. powerset saliquot: subtract an element in the powerset of the divisors.

0, 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 16, 1, 2, 1.

3. powerset maliquant: move-to an element in the powerset of the nondivisors.

0, 0, 1, 0, 2, 1, 4, 8, 16, 2, 32, 1, 64, 4, 128, 8, 256, 16, 512.

4. powerset saliquant: subtract an element in the powerset of the nondivisors.

0, 0, 1, 1, 2, 1, 4, 4, 8, 2, 16, 8, 32, 32, 64, 64, 128, 8, 256, 64.

5. powerset totative: move-to an element in the powerset of the relatively prime numbers smaller than the heap.

0, 1, 2, 1, 4, 1, 8, 1, 2, 1, 16, 1, 32, 1, 2, 1, 64, 1, 128.

6. powerset nontotative: move-to an element in the powerset of the nonrelatively prime numbers smaller than the heap.

0, 0, 0, 1, 0, 2, 0, 4, 1, 8, 0, 16, 0, 32, 4, 64, 0, 128, 0.



For single heaps, these games tend to have nim-values that are powers of two. The intuition is that, by induction, there is plenty of opportunity in a powerset game to construct any number between the powers of two by using various sums of single heaps. We will study the precise behavior in a couple of instances, namely items 2, 3, and 5.

Theorem 15. Consider powerset saliquot. Then 𝒮𝒢(0) = 0, and 𝒮𝒢(n) = 2^p if 2^p is the largest power of two dividing n ≥ 1.

Proof. A heap of size zero has nim-value 0 because it is terminal by definition. The heap of size one has nim-value 1 = 2^0 because 1 | 1. The heap of size two has nim-value 2 = 2^1 because 1, 2 | 2, and 𝒮𝒢(2 − 1) = 1, 𝒮𝒢(2 − 2) = 0. Both these cases satisfy the largest-power-of-two-divisor criterion.
Suppose that the statement holds for all numbers smaller than the heap size n = 2^p a with a odd, say. We must show that all nim-values less than 2^p exist among the options of n. For each q < p, we will find a number 0 ≤ m < n with 2^q the largest power of 2 dividing m and such that n − m is a divisor of n. For example, if m = n − 2^q a ∈ ℕ, then n − m = 2^q a | n, and m = 2^q a(2^{p−q} − 1) has greatest power of two divisor 2^q. By induction, 𝒮𝒢(m) = 2^q. Let q range between 0 and p − 1. By the rules of powerset and by using the disjunctive sum operator, it follows that all nim-values less than 2^p exist among the options of n.
Next, we must prove that the nim-value 2^p does not exist among the options. It suffices to show that no individual heap in an option, which is a disjunctive sum, is of the same form as n. This follows by applying the nim-sum, since, by induction, all numbers smaller than n have nim-values that are powers of two.
A divisor of n is of the form 2^q y, where y | a is odd and q ≤ p. Suppose first that q = p. Then n − 2^p y = 2^p y(a/y − 1); but a/y is odd, and hence a/y − 1 is even, so n − 2^p y = 2^z b with z > p and b odd, unless a = y, when n − 2^p y = 0. In case q < p, we get n − 2^q y = 2^q y(2^{−q} n/y − 1), and since 2^{−q} n/y − 1 is odd, by induction the heap is not of the same form (since q < p).

Recall the indexing function io of largest odd divisor, concerning the singleton
version of maliquant. It applies here as well with some initial modification; although
it looks like one could “peel” off the 2s, it does not work due to the irregular set of
initial nim-values.

Theorem 16. Consider powerset maliquant. The sequence starts at a heap of size one, and the first eight nim-values are 0, 0, 1, 0, 2, 1, 4, 8. Otherwise, if n = 2k + 1 with k ≥ 4, then 𝒮𝒢(n) = 2^k, and if n ≥ 10 is even, then 𝒮𝒢(n) = 𝒮𝒢(n/2).

Proof. The smaller heaps are easy to justify by hand. The heap of size 8 is pivotal. It
achieves nim-value 0 by the option 3 + 6, both numbers being nondivisors. The nim-
values 1, 2, 4 may be combined freely by using the nondivisor heaps 5, 6, 7. Hence the
nim-values of the small heaps are verified.

For the base cases, we consider the heaps of sizes 9 and 10, of nim-values 16 = 2^4 (with 2 ⋅ 4 + 1 = 9) and 2 = 𝒮𝒢(5), respectively.
For the induction, let us start with a heap of even size, say n = 4t + 2. It suffices to show that 𝒮𝒢(n) = 𝒮𝒢(n/2). Observe that each number strictly between n/2 and n is a nondivisor of n and hence may be part of a disjunctive sum to build desirable nim-values by induction. Since n/2 = 2t + 1 is odd, each power of two 2^4, . . . , 2^t appears among the nim-values for heap sizes in [9, n/2]. By induction each power of two 2^0, . . . , 2^{t−1} appears as a nim-value in the heap interval I = [n/2 − 1, . . . , n − 1]. Namely, for y ∈ [0, t − 1], multiply 2y + 1 by 2 iteratively until 2^s(2y + 1) ∈ I. Thus all numbers smaller than 2^t appear as options, but note that the nim-value 2^t appears only as a nim-value of a divisor of n, and hence this is the minimal exclusive. This proves that 𝒮𝒢(n) = 2^t if n is even, as desired.
Now consider odd n = 2t + 1, say. By induction the nim-value of each heap smaller than n is less than 2^t. In case of even t, the powers of two 2^{t/2}, . . . , 2^{t−1} appear as nim-values of odd heaps in the interval [t, . . . , 2t − 1]. Similarly to the case of even n, the smaller powers of two can also be found as nim-values in this interval. Therefore each nim-value smaller than 2^t appears as an option of a disjunctive sum of nondivisors of n. Hence the minimal exclusive is 𝒮𝒢(n) = 2^t.

Recall the function ip , the index of the smallest prime divisor of n, where the prime
2 has index 1, for the solution of totative from Section 2.3. It applies for the powerset
game as well.

Theorem 17. Consider powerset totative. Then 𝒮𝒢(1) = 0, and for n > 1, 𝒮𝒢(n) = 2^{i−1}, where i = ip(n).

Proof. The nim-value of a heap of size one is 0, since it is terminal. A heap of size two has a move to the heap of size one, because 1 is relatively prime to all numbers greater than 1. Hence 𝒮𝒢(2) = 1 = 2^0. Suppose that the statement holds for all numbers smaller than n > 1.
If n is even, then we must prove that there is a move to nim-value 0 but no move to nim-value 1. The first part is done in the first paragraph. Hence let us show by induction that there is no move to nim-value 1. Since all smaller heaps of odd size have even nim-values, a disjunctive sum of nim-value 1 must contain a heap of even size. This is impossible, since heaps of even size are not relatively prime to n.
Suppose that n is odd, so that the index of the smallest prime divisor of n is i > 1. We must show that 𝒮𝒢(n) = 2^{i−1}. By induction each prime with index q < i, say, appears as the smallest prime divisor of some heap size smaller than n, of nim-value 2^{q−1}. Since any disjunctive sum of heap sizes relatively prime to n is allowed as an option, by induction each nim-value smaller than 2^{i−1} can be obtained.
Next, we show that there is no option of nim-value 2^{i−1}. This generalizes the idea used in the second paragraph. A disjunctive sum of nim-value 2^{i−1} must contain a component of nim-value 2^{i−1}. However, by induction those heap sizes are not relatively prime to n.

8 Discussion. Future work


A natural generalization of counting the number of elements satisfying an arithmetic
function is to instead consider their sum or partial sums. For example, consider the
sum generalization of the mtau, that is, the option of n is the sum of the proper di-
visors of n. For example, 4 has the proper divisors 1 and 2, and therefore the option
is 3. Loops and cycles occur for perfect numbers (those where the sum of proper di-
visors equals the number) and (temporarily) increased heap sizes for abundant num-
bers (those where the sum of proper divisors is greater than the number). The first loop
appears at 1 + 2 + 3 = 6 (where “+” is the arithmetic sum).
This might at first sight seem to disqualify the Sprague–Grundy function,7 but
in fact, since the game is binary, the cycles are trivial in the following sense. If we
play a disjunctive sum of games where one component will not end, then the full
game will not end. Conversely, if no component contains a cycle, but perhaps tem-
porarily increasing heap sizes, then the full game will end, and a winner may be de-
clared. The nim-value sequence of this ruleset begins at a heap of size one as follows:
0, 1, 1, 0, 1, ∞, 1, 0, 1, 1, 1, ?, where the infinity at heap size 6 indicates the loop 1+2+3 = 6.
Let us compute the nim-value for n = 12, which is indicated by “?” in the sequence
above. The sum of proper divisors is 16 (temporary increase), followed by options 15
and 9, in the next two moves. The sequence above indicates that 𝒮𝒢 (9) = 1, and there-
fore 𝒮𝒢 (12) = 0. The recurrence where a number is mapped to the sum of its proper
divisors has been studied in number theory literature, without the games’ twist. It
seems worth some more attention.
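As a small illustration (a sketch under stated assumptions, with our own helper names): the ruleset above is binary, so its nim-values are computable by following the aliquot trajectory, provided we flag a component as unending when the trajectory revisits a heap size.

    # Nim-values of the "move to the sum of proper divisors" ruleset
    # (sketch).  Assumptions: a heap of size 1 is terminal, and a heap
    # whose trajectory revisits a size is loopy (value rendered None).
    def aliquot(n):
        return sum(d for d in range(1, n) if n % d == 0)

    def sg_aliquot(n, seen=frozenset()):
        if n == 1:
            return 0
        if n in seen:                  # the trajectory cycles
            return None
        opt = sg_aliquot(aliquot(n), seen | {n})
        if opt is None:
            return None
        return 0 if opt != 0 else 1    # mex of the single option value

    # Matches 0, 1, 1, 0, 1, ∞, 1, 0, 1, 1, 1, 0 (∞ rendered as None);
    # small n only: aliquot trajectories need not stay bounded in general.
    assert [sg_aliquot(n) for n in range(1, 13)] == \
        [0, 1, 1, 0, 1, None, 1, 0, 1, 1, 1, 0]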
Even more interesting is the same ruleset but where the player may pick any par-
tial sum of proper divisors. We have the following table, where, for example, the op-
tions of a heap of size 4 are 1, 2, and 1 + 2.

n    opt(n)                  𝒮𝒢(n)
1    ∅                       0
2    1                       1
3    1                       1
4    1, 2, 3                 2
5    1                       1
6    1, 2, 3, 4, 5, 6        ∞_3
7    1                       1
8    1, 2, 3, 4, 5, 6, 7     ∞_3
9    1, 3, 4                 3

7 Fraenkel et al. have developed a generalized Sprague–Grundy function for cyclic games on a finite
number of positions.

Here ∞_3 means nim-value 3, but with an additional option to an infinity (a loop).
Consider, for example, the disjunctive sum of heaps 6 + 9. Then every move apart from
playing to ∞_3 is losing, so this game is a draw. However, playing instead 6 + 7, the
first player wins by moving to 2 + 7, 3 + 7, or 5 + 7; that is, a loopy game component
is sensitive to the disjunctive sum. The ω game also seems to have an interesting sum
variation.
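The table and the ∞_3 entries can be reproduced mechanically. The following sketch uses one convention consistent with the entries shown (an assumption on our part): loopy options are left out of the mex, and a heap is loopy when it can move to itself or to a loopy heap.

    # Reproducing the table above (sketch; heap sizes up to 9 only, where
    # no option exceeds the heap).  Options are nonzero subset sums of
    # the proper divisors.
    from itertools import combinations

    def mex(values):
        m = 0
        while m in values:
            m += 1
        return m

    def subset_sums(divs):
        return {sum(c) for r in range(1, len(divs) + 1)
                       for c in combinations(divs, r)}

    sg, loopy = {}, {}
    for n in range(1, 10):
        divs = [d for d in range(1, n) if n % d == 0]
        opts = subset_sums(divs)
        finite = {sg[m] for m in opts if m < n and not loopy[m]}
        loopy[n] = any(m == n or (m < n and loopy[m]) for m in opts)
        sg[n] = mex(finite)

    assert [sg[n] for n in range(1, 10)] == [0, 1, 1, 2, 1, 3, 1, 3, 3]
    assert [n for n in range(1, 10) if loopy[n]] == [6, 8]   # the ∞_3 heaps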
Wythoff partizan subtraction [6] studies a partizan, so-called complementary, sub-
traction game. The players' move options are conveniently represented by one single
sequence of natural numbers: one of the players subtracts integers that appear in the
sequence, whereas the other player subtracts positive integers that do not appear in
the sequence (provided that the heap remains nonnegative). Following that idea, here
any arithmetic function defines a partizan move-to or subtraction game by letting one
of the players play numbers from the arithmetic function, whereas the other plays
numbers from its negation. The partizan game values (canonical forms) of such games
remain big open problems.

Bibliography
[1] E. R. Berlekamp, J. H. Conway, R. K. Guy. Winning Ways, Academic Press, London, 1982.
[2] C. Bouton, Nim, a game with a complete mathematical theory, Annals of Math., 2nd Ser. 3
(1901–2), 35–39.
[3] A. Dailly, E. Duchene, U. Larsson, G. Paris, Partition games, Discrete Appl. Math. 285 (2020),
509–525.
[4] P. Grundy, Mathematics and games, Eureka 2 (1939), 6–8.
[5] G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers (5th ed.), The Clarendon
Press, Oxford University Press, New York, 1979.
[6] N. Mc Kay, U. Larsson, R. J. Nowakowski, A. Siegel, Wythoff partizan subtraction, Int. J. Game
Theory 47 (2018), 613–652. Special Issue on Combinatorial Games, 2018. Invited paper from
Combinatorial Game Theory Colloquium, Lisbon, 2015.
[7] H. Shapiro, An arithmetic function arising from the Φ function, Am. Math. Mon. 50:1 (1943),
18–30.
[8] N. Sloane, The On-Line Encyclopedia of Integer Sequences (OEIS), website at http://oeis.org/.
[9] R. Sprague, Über mathematische Kampfspiele, Tôhoku J. Math. 41 (1936), 438–444.
Yuki Irie
A base-p Sprague–Grundy-type theorem for
p-calm subtraction games: Welter’s game and
representations of generalized symmetric
groups
Abstract: For impartial games Γ and Γ′ , the Sprague–Grundy function of the disjunc-
tive sum Γ + Γ′ is equal to the Nim-sum of their Sprague–Grundy functions. In this
paper, we introduce p-calm subtraction games and show that for p-calm subtraction
games Γ and Γ′ , the Sprague–Grundy function of a p-saturation of Γ + Γ′ is equal to the
p-Nim-sum of the Sprague–Grundy functions of their p-saturations. Here a p-Nim-sum
is the result of addition without carrying in base p, and a p-saturation of Γ is an impar-
tial game obtained from Γ by adding some moves. It will turn out that Nim and Welter’s
game are p-calm. Further, using the p-calmness of Welter’s game, we generalize a re-
lation between Welter’s game and representations of symmetric groups to disjunctive
sums of Welter’s games and representations of generalized symmetric groups; this re-
sult is described combinatorially in terms of Young diagrams.

1 Introduction
Base 2 plays a key role in combinatorial game theory. Specifically, the Sprague–Grundy
function of the disjunctive sum of two impartial games is equal to the Nim-sum of their
Sprague–Grundy functions. Here a Nim-sum is the result of addition without carrying
in base 2. In particular, the Sprague–Grundy value of a position in Nim equals the Nim-
sum of the heap sizes. It is rare that the Sprague–Grundy function of an impartial game
can be written explicitly like Nim. Another well-known example is Welter’s game, a
subtraction game like Nim; Welter [21] gave an explicit formula for its Sprague–Grundy
function by using the binary numeral system (see Theorem 2.8).
A few games related to base p have also been found, where p is an integer greater
than 1, not necessarily prime. For example, Flanigan found a game, called Rimp ;
the Sprague–Grundy value of a position in Rimp equals the p-Nim-sum of the heap

Acknowledgement: This work was supported by JSPS KAKENHI Grant Number JP20K14277.
The author would like to thank the anonymous referee for carefully reading the paper and for valuable
comments.

Yuki Irie, Research Alliance Center for Mathematical Sciences, Tohoku University, Miyagi, Japan,
e-mail: yirie@tohoku.ac.jp


sizes, where a p-Nim-sum is the result of addition without carrying in base p.1 We use
⊕p for the p-Nim-sum.2 For example, consider the heap (3, 7, 4). Whereas in Nim the
Sprague–Grundy value of (3, 7, 4) is equal to

3 ⊕2 7 ⊕2 4 = (1 + 2) ⊕2 (1 + 2 + 4) ⊕2 (4) = 0,

in Rim3 , its Sprague–Grundy value is equal to

3 ⊕3 7 ⊕3 4 = (3) ⊕3 (1 + 2 ⋅ 3) ⊕3 (1 + 3) = 2 + 3 = 5.

Thus we can say that Rimp is a base-p version of Nim. Irie [6] observed that there
are infinitely many base-p versions of Nim. From this observation he introduced
p-saturations and showed that a p-saturation of Nim is a base-p version of Nim, that
is, the Sprague–Grundy value of a position in a p-saturation of Nim equals the p-Nim-
sum of the heap sizes. Figure 15.1 shows an example of a 3-saturation of Nim. Although
we can take tokens from just one heap in Nim, it is allowed to take tokens from mul-
tiple heaps with a restriction in a p-saturation of Nim. Incidentally, Rimp is one of
the p-saturations of Nim, and p-saturations are defined for subtraction games (see
Section 3 for details).

Figure 15.1: An example of a 3-saturation of Nim.

Further, it was shown that a p-saturation of Welter’s game is a base-p version of Wel-
ter’s game [6]. In other words, we can obtain an explicit formula for the Sprague–
Grundy function of a p-saturation of Welter’s game by rewriting Welter’s formula with
base p (see Theorem 3.6).
In this paper an impartial game is defined to be a digraph such that the maximum
length of a walk from each vertex is finite. We will recall the basics of impartial games
in Section 2. Let Γ1 and Γ2 be two impartial games. Then the Sprague–Grundy function
of the disjunctive sum Γ1 + Γ2 is equal to the Nim-sum of their Sprague–Grundy func-
tions. The fundamental question of this paper is whether there exists an operation +p
such that the Sprague–Grundy function of Γ1 +p Γ2 is equal to the p-Nim-sum of their

1 Rimp was devised by James A. Flanigan in an unpublished paper entitled “NIM, TRIM and RIM.”
2 The operation ⊕p is different from that related to Moore’s Nimk−1 [10] and Li’s k-person Nim [8].
These games were analyzed using addition modulo k in base 2.

Sprague–Grundy functions. For example, if p = 2, then the ordinary disjunctive sum
satisfies this condition. We present a partial solution to the question. More precisely,
we consider the following condition on subtraction games Γ1 and Γ2 :
(PN) The Sprague–Grundy function of a p-saturation of Γ1 + Γ2 is equal to the p-Nim-
sum of the Sprague–Grundy functions of their p-saturations.

For a subtraction game Γ1 satisfying a saturation condition, we first give a necessary
condition for Γ1 to satisfy (PN) when Γ2 is Nim (Lemma 3.8). If Γ1 satisfies this necessary
condition, then Γ1 will be said to be p-calm. It will turn out that Nim and Welter’s game
are p-calm. Our main theorem (Theorem 3.11) states that p-calm subtraction games
satisfy (PN) and are closed under disjunctive sum. In particular, Γ1 and Nim satisfy
(PN) if and only if Γ1 is p-calm.
Using the p-calmness of Welter’s game, we can generalize a relation between Wel-
ter’s game and representations of symmetric groups; this result is described combi-
natorially in Section 4, and its algebraic interpretation is stated in Remark 4.13. Sato
[16, 17, 18] studied Welter’s game independently from Welter. Although Welter’s game
is usually described as a coin-on-strip game, Sato realized that this game can be con-
sidered as a game with Young diagrams, as we will describe in Section 4.1. He then
found that the Sprague–Grundy function of Welter’s game can be written in a sim-
ilar way to the hook formula, which is a formula for representations of symmetric
groups (Theorems 4.1 and 4.5). From this Sato conjectured that Welter’s game is re-
lated to representations of symmetric groups. A relation between them was discov-
ered by Irie [6]. Specifically, for a prime p, he showed a theorem on representations
of symmetric groups, which will be called the p′ -component theorem (Theorem 4.8);
by using this theorem he obtained an explicit formula for the Sprague–Grundy func-
tion of a p-saturation of Welter’s game.3 Here the p′ -component theorem is a result of
representations of symmetric groups with degree prime to p. Incidentally, on repre-
sentations with degree prime to p, there is a famous conjecture of McKay, which is one
of the most important conjectures in representation theory (see, for example, [11] for
details). In the present paper, we generalize the p′ -component theorem to disjunctive
sums of Welter’s games and representations of generalized symmetric groups, which
will be described combinatorially in terms of Young diagrams (Theorem 4.11).
This paper is organized as follows. Section 2 contains the basics of impartial
games. In Section 3, we recall p-saturations and define p-calm subtraction games. We
then prove that p-calm subtraction games satisfy the condition (PN) (Theorem 3.11);
moreover, using the p-calmness of Welter’s game, we generalize a property of Welter’s

3 Although the p′ -component theorem holds only when p is prime, a slightly weaker result holds even
when p is not prime. By using this we can obtain an explicit formula for the Sprague–Grundy function
of a p-saturation of Welter’s game.

game (Proposition 3.23), which yields a generalization of the p′ -component theorem


in Section 4.

2 Subtraction games
This section provides the basics of impartial games. We define subtraction games, dis-
junctive sums, and Sprague–Grundy functions. See [1, 3] for more details of combina-
torial game theory.
We first introduce some notation for impartial games. Let Γ be a digraph with ver-
tex set 𝒫Γ and edge set ℰΓ , that is, 𝒫Γ is a set, and ℰΓ ⊆ 𝒫Γ2 . As we have defined in the
introduction, the digraph Γ is called a (short) impartial game if the maximum length
lgΓ (A) of a walk from each vertex A is finite. Let Γ be an impartial game. The vertex set
𝒫Γ is called its position set. Let A and B be two positions in Γ. If (A, B) ∈ ℰΓ , then B is
called an option of A. If there exists a path from A to B, then B is called a descendant
of A. A descendant B of A is said to be proper if B ≠ A.

Example 2.1. The digraph with vertex set { 1, 2, 3 } and edge set { (1, 2), (2, 3), (1, 3) } is an
impartial game. However, the digraph with vertex set { 1, 2 } and edge set { (1, 2), (2, 1) }
is not an impartial game since it has the walk (1, 2, 1, 2, . . .) of infinite length.

Remark 2.2. Let Γ be an impartial game with at least one position. We can consider
Γ as the following two-player game. Before the game, we put a token on a starting
position A ∈ 𝒫Γ . The first player moves the token from A to its option B. Similarly,
the second player moves the token from B to its option C. In this way, the two play-
ers alternately move the token. The winner is the player who moves the token last.
For example, let Γ be the impartial game with position set { 1, 2, 3, 4 } and edge set
{ (1, 2), (2, 3), (1, 4) }, and start at position 1. The first player can move the token to ei-
ther position 2 or 4. If she moves it to position 2, then the second player moves it to 3
and wins. Thus she should move the token to 4.

We now define subtraction games. Let ℕ be the set of nonnegative integers. Ele-
ments in ℕ^m will be denoted by upper-case letters, and their components by lower-
case letters with superscripts; for example, A = (a^1, …, a^m) ∈ ℕ^m. Let 𝒫 ⊆ ℕ^m and
C ⊆ ℕ^m \ { (0, …, 0) }. Define Γ(𝒫, C) to be the impartial game with position set 𝒫 and
edge set

{ (A, B) ∈ 𝒫^2 : A − B ∈ C }.

The game Γ(𝒫 , C ) is called a subtraction game.

Example 2.3. Let

C_m^1 = { C ∈ ℕ^m : wt(C) = 1 },

where wt(C) is the Hamming weight of C, that is, the number of nonzero components
of C. The subtraction game Γ(ℕ^m, C_m^1) is called Nim and is denoted by 𝒩m. For exam-
ple, in 𝒩2 the options of (1, 2) are (0, 2), (1, 1), and (1, 0).

Example 2.4. Let ℳm = Γ(ℕ^m \ { (0, …, 0) }, C_m^1). The winner in ℳm is the loser in
𝒩m, and ℳm is called misère Nim.

Example 2.5. Let

𝒫 = { A ∈ ℕ^m : a^i ≠ a^j for 1 ≤ i < j ≤ m }.

The subtraction game Γ(𝒫, C_m^1) is called Welter's game and is denoted by 𝒲m. For ex-
ample, in 𝒲2 the options of (1, 2) are (0, 2) and (1, 0). Note that

lg𝒲m(A) = ∑_{i=1}^{m} a^i − (1 + 2 + ⋯ + (m − 1)) = ∑_{i=1}^{m} a^i − \binom{m}{2}.  (15.1)

For example, if A = (1, 3, 4), then

((1, 3, 4), (0, 3, 4), (0, 2, 4), (0, 1, 4), (0, 1, 3), (0, 1, 2))

has length 5 (= 1 + (3 − 1) + (4 − 2)).

We define disjunctive sums. For i ∈ { 1, 2 }, let Γ^i be an impartial game, and let
𝒫^i = 𝒫_{Γ^i} and ℰ^i = ℰ_{Γ^i}. The disjunctive sum Γ^1 + Γ^2 of Γ^1 and Γ^2 is defined to be the
impartial game with position set 𝒫^1 × 𝒫^2 and edge set

{ ((A^1, A^2), (B^1, A^2)) : (A^1, B^1) ∈ ℰ^1, A^2 ∈ 𝒫^2 }
∪ { ((A^1, A^2), (A^1, B^2)) : (A^2, B^2) ∈ ℰ^2, A^1 ∈ 𝒫^1 }.

For example,

𝒩m = 𝒩1 + ⋯ + 𝒩1 (m summands).

Note that the disjunctive sum of two subtraction games is again a subtraction game.
Indeed, if 𝒫^i ⊆ ℕ^{m_i} and Γ^i = Γ(𝒫^i, C^i) for i ∈ { 1, 2 }, then Γ^1 + Γ^2 = Γ(𝒫^1 × 𝒫^2, C), where

C = { (c^{1,1}, …, c^{1,m_1}, 0, …, 0) : (c^{1,1}, …, c^{1,m_1}) ∈ C^1 }   (with m_2 trailing zeros)
  ∪ { (0, …, 0, c^{2,1}, …, c^{2,m_2}) : (c^{2,1}, …, c^{2,m_2}) ∈ C^2 }   (with m_1 leading zeros).

We define Sprague–Grundy functions. Let Γ be an impartial game. For A ∈ 𝒫Γ, the
Sprague–Grundy value sgΓ(A) of A is defined by

sgΓ(A) = mex { sgΓ(B) : (A, B) ∈ ℰΓ },

where mex S = min { α ∈ ℕ : α ∉ S }. The function sgΓ : 𝒫Γ → ℕ is called the Sprague–
Grundy function of Γ. An easy induction shows that the second player can force a win
if and only if the Sprague–Grundy value of the starting position equals 0.
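As a small illustration (ours, not from the original), the definition turns directly into a memoized computation; here it evaluates the game of Remark 2.2.

    # Sprague-Grundy values of a finite impartial game given as a digraph
    # {position: options}; a sketch with illustrative names.
    from functools import lru_cache

    def make_sg(edges):
        @lru_cache(maxsize=None)
        def sg(a):
            values = {sg(b) for b in edges.get(a, ())}
            m = 0
            while m in values:         # mex
                m += 1
            return m
        return sg

    sg = make_sg({1: (2, 4), 2: (3,)})
    assert sg(3) == 0 and sg(4) == 0   # terminal positions
    assert sg(2) == 1
    assert sg(1) == 2                  # nonzero: the first player wins from 1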

Theorem 2.6 ([19, 5]). If Γ1 and Γ2 are impartial games, then

sgΓ1 +Γ2 = sgΓ1 ⊕2 sgΓ2 ,

that is, if Ai is a position in Γi for i ∈ { 1, 2 }, then

sgΓ1 +Γ2 (A1 , A2 ) = sgΓ1 (A1 ) ⊕2 sgΓ2 (A2 ).

Example 2.7. Since 𝒩m = 𝒩1 + ⋅ ⋅ ⋅ + 𝒩1 , it follows from Theorem 2.6 that

sg𝒩m (A) = a1 ⊕2 ⋅ ⋅ ⋅ ⊕2 am for A ∈ ℕm .

Theorem 2.8 ([21]). If A is a position in Welter's game 𝒲m, then

sg𝒲m(A) = a^1 ⊕2 ⋯ ⊕2 a^m ⊕2 ⨁2_{i<j} (2^{ord2(a^i − a^j)+1} − 1),

where ord2(a) is the 2-adic order of a, that is, ord2(a) = max { L ∈ ℕ : 2^L | a } if a ≠ 0,
and ord2(a) = ∞ if a = 0.

Example 2.9. Let A be the position (7, 5, 3) in 𝒲3. By Theorem 2.8 we see that

sg𝒲3(A) = 7 ⊕2 5 ⊕2 3 ⊕2 (2^{ord2(7−5)+1} − 1) ⊕2 (2^{ord2(7−3)+1} − 1) ⊕2 (2^{ord2(5−3)+1} − 1)
        = 7 ⊕2 5 ⊕2 3 ⊕2 3 ⊕2 7 ⊕2 3 = 6.
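Theorem 2.8 is equally direct to evaluate; a sketch (our names) that reproduces the value 6:

    # Welter's formula (Theorem 2.8) as code; a sketch.
    def ord2(a):
        # a != 0 assumed (the entries of A are distinct)
        L = 0
        while a % 2 == 0:
            a //= 2
            L += 1
        return L

    def sg_welter(A):
        v = 0
        for a in A:
            v ^= a
        for i in range(len(A)):
            for j in range(i + 1, len(A)):
                v ^= 2 ** (ord2(abs(A[i] - A[j])) + 1) - 1
        return v

    assert sg_welter((7, 5, 3)) == 6   # Example 2.9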

3 p-calm subtraction games


We first recall p-saturations. We then give a necessary condition for Γ1 to satisfy (PN)
when Γ2 = 𝒩1 (Lemma 3.8) and define p-calm subtraction games. We next prove that
p-calm subtraction games satisfy (PN) and are closed under disjunctive sum (Theo-
rem 3.11). Finally, using the p-calmness of Welter’s game, we generalize a property of
Welter’s game (Proposition 3.23).

3.1 Notation
Fix an integer p greater than 1, and let Ω = { 0, 1, …, p − 1 }. For a, L ∈ ℕ, let a_L denote
the Lth digit in the p-adic expansion of a. Then

a = ∑_{L∈ℕ} a_L p^L,  a_L ∈ Ω.

We write a = [a_0, a_1, …]_(p). If N ∈ ℕ and a_L = 0 for L ≥ N, then we also write a =
[a_0, a_1, …, a_{N−1}]_(p). For example, if p = 2 and a = 14, then a = [0, 1, 1, 1, 0, 0, …]_(2) =
[0, 1, 1, 1]_(2). For x, y ∈ Ω, we write x ⊕ y = x ⊕p y and x ⊖ y = x ⊖p y, where ⊖p is the
subtraction without borrowing in base p. For example, if p = 5, then 2 ⊖5 4 = 3, and

13 ⊖5 16 = [3, 2]_(5) ⊖5 [1, 3]_(5) = [3 ⊖ 1, 2 ⊖ 3]_(5) = [2, 4]_(5) = 22.
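Both digitwise operations are easy to implement; the following sketch (our helper names) checks the computations above.

    # Digitwise base-p addition and subtraction, without carrying or
    # borrowing (sketch).
    def p_nim_sum(a, b, p):
        total, power = 0, 1
        while a or b:
            total += ((a % p + b % p) % p) * power
            a, b, power = a // p, b // p, power * p
        return total

    def p_nim_minus(a, b, p):
        total, power = 0, 1
        while a or b:
            total += ((a % p - b % p) % p) * power
            a, b, power = a // p, b // p, power * p
        return total

    assert p_nim_minus(13, 16, 5) == 22              # [3,2] ⊖_5 [1,3] = [2,4]
    assert p_nim_sum(3, p_nim_sum(7, 4, 3), 3) == 5  # the Rim_3 example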

Before proceeding, we present a simple lemma.

Lemma 3.1. For i ∈ { 1, …, m }, let a^i and b^i be nonnegative integers. If a^i ≡ b^i (mod p^N),
then

∑_i (a^i − b^i) ≡ ⨁p_i (a^i ⊖p b^i) ≡ [0, …, 0, ⨁_i (a^i_N ⊖ b^i_N)]_(p)  (mod p^{N+1}).  (15.2)

Proof. Since a^i ≡ b^i (mod p^N), it follows that

a^i − b^i ≡ a^i ⊖p b^i ≡ [0, …, 0, a^i_N ⊖ b^i_N]_(p)  (mod p^{N+1}).

Hence (15.2) holds.

3.2 p-saturations
We define p-saturations. For a ∈ ℕ, let ordp(a) denote the p-adic order of a, that is,
ordp(a) = max { L ∈ ℕ : p^L | a } if a ≠ 0, and ordp(a) = ∞ if a = 0.
For example, ord2(12) = ord2([0, 0, 1, 1]_(2)) = 2 and ord3(12) = ord3([0, 1, 1]_(3)) = 1.
Define

C_m^{(p)} = { C ∈ ℕ^m \ { (0, …, 0) } : ordp(∑_i c^i) = mordp(C) },

where mordp (C) = min { ordp (ci ) : 1 ≤ i ≤ m }. For example,

(1, 0), (1, 3) ∈ C2(3) and (1, 2) ∈ ̸ C2(3)

because

ord3 (1 + 0) = 0 = min { ord3 (1), ord3 (0) } = min { 0, ∞ } ,


ord3 (1 + 3) = 0 = min { ord3 (1), ord3 (3) } = min { 0, 1 } , and
ord3 (1 + 2) = 1 > 0 = min { ord3 (1), ord3 (2) } = min { 0, 0 } .
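The membership test is a one-liner once ordp is available; a sketch (our names, with ordp(0) represented by floating-point infinity) confirming the three memberships above:

    # Membership in C_m^(p) (sketch).
    import math

    def ord_p(a, p):
        if a == 0:
            return math.inf
        L = 0
        while a % p == 0:
            a //= p
            L += 1
        return L

    def in_C_p(C, p):
        if all(c == 0 for c in C):
            return False               # (0,...,0) is excluded
        return ord_p(sum(C), p) == min(ord_p(c, p) for c in C)

    assert in_C_p((1, 0), 3) and in_C_p((1, 3), 3)
    assert not in_C_p((1, 2), 3)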

Note that C_m^1 ⊆ C_m^{(p)}. Let Γ = Γ(𝒫, C) and Γ̃ = Γ(𝒫, C̃). The game Γ̃ is called a p-saturation4
of Γ if it has the same Sprague–Grundy function as Γ(𝒫, C ∪ C_m^{(p)}), that is,

sgΓ̃(A) = sgΓ(𝒫, C∪C_m^{(p)})(A)  (15.3)

for every A ∈ 𝒫. It is clear that Γ(𝒫, C ∪ C_m^{(p)}) is a p-saturation of Γ. In this paper, we
will consider a subtraction game Γ(𝒫, C) satisfying
(∗) C ⊆ C_m^{(p)}.

Note that if Γ satisfies (∗), then Γ̃ is a p-saturation of Γ if and only if sgΓ̃ = sgΓ(𝒫, C_m^{(p)}).
Moreover, if two subtraction games Γ1 and Γ2 satisfy (∗), then so does Γ1 + Γ2.
It is known that we can obtain base-p versions of some games by using p-satura-
tions.

Example 3.2 ([6]). Let Γ̃ = Γ(ℕ^2, C_2^{(3)}). Table 15.1 shows the Sprague–Grundy values of
some positions in Γ̃. It is easy to see that sgΓ̃(a, 0) = sgΓ̃(0, a) = a. Since (1, 1) ∈ C_2^{(3)}, it
follows that (0, 0) is an option of (1, 1). Thus sgΓ̃(1, 1) = 2. We also see that sgΓ̃(1, 2) = 0
because (0, 0) is not an option of (1, 2).

Table 15.1: Some Sprague–Grundy values in Γ(ℕ^2, C_2^{(3)}) (rows a = 0, …, 3; columns b = 0, …, 3).

a\b  0  1  2  3
0    0  1  2  3
1    1  2  0  4
2    2  0  1  5
3    3  4  5  6

4 The definition of p-saturations is slightly generalized from that in [6].



In general, let Γ̃ be a p-saturation of Nim 𝒩m. If A is a position in Γ̃, then

sgΓ̃(A) = a^1 ⊕p ⋯ ⊕p a^m.  (15.4)

In other words, a p-saturation of Nim is a base-p version of Nim.
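Equation (15.4) can be verified by brute force on a finite grid. The sketch below recomputes Table 15.1 this way; it assumes the helpers in_C_p and p_nim_sum from the earlier sketches are in scope.

    # Brute-force Sprague-Grundy values of Γ(N^2, C_2^(3)) on a grid,
    # compared with the digitwise sum (sketch).
    N = 12
    sg = {}
    for s in range(2 * N + 1):                 # by total heap size
        for a in range(max(0, s - N), min(s, N) + 1):
            b = s - a
            opts = set()
            for x in range(a + 1):
                for y in range(b + 1):
                    if (x, y) != (a, b) and in_C_p((a - x, b - y), 3):
                        opts.add(sg[(x, y)])
            m = 0
            while m in opts:                   # mex
                m += 1
            sg[(a, b)] = m

    assert all(sg[(a, b)] == p_nim_sum(a, b, 3)
               for a in range(N + 1) for b in range(N + 1))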

Remark 3.3. Note that 𝒩m is a 2-saturation of itself. This means that adding an edge
(A, B) with A − B ∈ C_m^{(2)} to 𝒩m does not change its Sprague–Grundy function. Inciden-
tally, it is known that Γ(ℕ^m, C) is a 2-saturation of 𝒩m if and only if C_m^1 ⊆ C ⊆ C_m^{(2)} [2].

Remark 3.4. Let A and B be two distinct elements of ℕ^m with a^i ≥ b^i for i ∈ { 1, …, m }.
Then A − B ∈ C_m^{(p)} if and only if

⨁_i (a^i_N ⊖ b^i_N) ≠ 0,  (15.5)

where N = mordp(A − B). Indeed, since a^i ≡ b^i (mod p^N), it follows from Lemma 3.1
that

∑_i (a^i − b^i) ≡ [0, …, 0, ⨁_i (a^i_N ⊖ b^i_N)]_(p)  (mod p^{N+1}).

Therefore A − B ∈ C_m^{(p)} if and only if (15.5) holds.

Theorem 3.5 ([7]). Let Γ̃ be a p-saturation of misère Nim ℳm. If A is a position in Γ̃, then

sgΓ̃(A) = a^1 ⊕p ⋯ ⊕p a^m ⊕p (p^{mordp(A)+1} − 1).  (15.6)

Theorem 3.6 ([6]). Let Γ̃ be a p-saturation of Welter's game 𝒲m. If A is a position in Γ̃,
then

sgΓ̃(A) = a^1 ⊕p ⋯ ⊕p a^m ⊕p ⨁p_{i<j} (p^{ordp(a^i − a^j)+1} − 1).  (15.7)

In particular, 𝒲m is a 2-saturation of itself.

Example 3.7. Let Γ̃ be a 5-saturation of 𝒲3, and let A be the position (7, 5, 3) in Γ̃. It
follows from Theorem 3.6 that

sgΓ̃(A) = 7 ⊕5 5 ⊕5 3 ⊕5 (5^{ord5(7−5)+1} − 1) ⊕5 (5^{ord5(7−3)+1} − 1) ⊕5 (5^{ord5(5−3)+1} − 1)
        = 7 ⊕5 5 ⊕5 3 ⊕5 4 ⊕5 4 ⊕5 4 = 12.
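Formula (15.7) in code (a sketch; it reuses ord_p and p_nim_sum from the sketches above):

    # psi_p: the right-hand side of (15.7) (sketch).
    def psi_p(A, p):
        v = 0
        for a in A:
            v = p_nim_sum(v, a, p)
        for i in range(len(A)):
            for j in range(i + 1, len(A)):
                term = p ** (ord_p(abs(A[i] - A[j]), p) + 1) - 1
                v = p_nim_sum(v, term, p)
        return v

    assert psi_p((7, 5, 3), 5) == 12   # Example 3.7
    assert psi_p((7, 5, 3), 2) == 6    # agrees with Theorem 2.8 for p = 2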

3.3 p-calm subtraction games


Let Γ1 be a subtraction game satisfying (∗). The next lemma gives a necessary condi-
tion for Γ1 to satisfy (PN) when Γ2 = 𝒩1 . We will show that this condition is sufficient
in Section 3.4.

Lemma 3.8. Let Γ1 be a subtraction game Γ(𝒫, C) with C ⊆ C_m^{(p)}, let Γ = Γ1 + 𝒩1, and let
Γ̃ be a p-saturation of Γ. Suppose that

sgΓ̃(A, a) = sgΓ̃1(A) ⊕p a  (15.8)

for every position (A, a) in Γ̃. If B is a proper descendant of A in Γ1, then

sgΓ̃1(A) − sgΓ̃1(B) ≡ ∑_i (a^i − b^i)  (mod p^{N+1}),  (15.9)

where N = mordp(A − B), and Γ̃1 is a p-saturation of Γ1.

Proof. We may assume that Γ̃ = Γ(𝒫 × ℕ, C_{m+1}^{(p)}) and Γ̃1 = Γ(𝒫, C_m^{(p)}) since C ⊆ C_m^{(p)}.
Suppose that there are a position A and its proper descendant B not satisfying (15.9),
that is,

α − β ≢ ∑_i (a^i − b^i)  (mod p^{N+1}),  (15.10)

where α = sgΓ̃1(A) and β = sgΓ̃1(B). Then α ≠ β because if α = β, then by Lemma 3.1

[0, …, 0, ⨁_i (a^i_N ⊖ b^i_N)]_(p) ≡ ∑_i (a^i − b^i) ≢ α − β ≡ 0  (mod p^{N+1}),

so B is an option of A, which is impossible.


We prove that (15.8) does not hold. Consider the two positions (A, β⊖p α) and (B, 0).
Note that

sgΓ̃1 (A) ⊕p (β ⊖p α) = β and sgΓ̃1 (B) ⊕p 0 = β.

We show that (B, 0) is an option of (A, β ⊖p α) in Γ,̃ which will imply that (15.8) does not
hold. Let M = ordp (β ⊖p α).

Case 1 (M < N). We see that

mordp((A, β ⊖p α) − (B, 0)) = min { mordp(A − B), ordp(β ⊖p α) } = min { N, M } = M.

Since a^i − b^i ≡ 0 (mod p^N), it follows that

((∑_i (a^i − b^i)) + (β ⊖p α − 0))_M = β_M ⊖ α_M ≠ 0.

Therefore (B, 0) is an option of (A, β ⊖p α) in Γ̃.

Case 2 (M ≥ N). Note that

mordp((A, β ⊖p α) − (B, 0)) = min { N, M } = N.

By Lemma 3.1

∑_i (a^i − b^i) ≡ [0, …, 0, ⨁_i (a^i_N ⊖ b^i_N)]_(p)  (mod p^{N+1})  (15.11)

and

α − β ≡ [0, …, 0, α_N ⊖ β_N]_(p)  (mod p^{N+1}).  (15.12)

By combining (15.10)–(15.12) we have

α_N ⊖ β_N ≠ ⨁_i (a^i_N ⊖ b^i_N).

Hence

(⨁_i (a^i_N ⊖ b^i_N)) ⊕ (β_N ⊖ α_N) ≠ 0.

This implies that (B, 0) is an option of (A, β ⊖p α) in Γ̃. Therefore (15.8) does not hold.
Let Γ be a subtraction game Γ(𝒫, C) with C ⊆ C_m^{(p)}. The game Γ is said to be p-calm
if it satisfies (15.9), that is, for every position A and every proper descendant B of A,

sgΓ̃(A) − sgΓ̃(B) ≡ ∑_i (a^i − b^i)  (mod p^{N+1}),

where Γ̃ is a p-saturation of Γ, and N = mordp(A − B).

Example 3.9. The game 𝒩m is p-calm. Indeed, let Γ̃ be a p-saturation of 𝒩m. Let A be
a position in Γ̃, and let B be a proper descendant of A. Note that

⨁p_i a^i ≡ ⨁p_i b^i  (mod p^N),

where N = mordp(A − B). It follows from Example 3.2 and Lemma 3.1 that

sgΓ̃(A) − sgΓ̃(B) ≡ (⨁p_i a^i) − (⨁p_i b^i) ≡ (⨁p_i a^i) ⊖p (⨁p_i b^i) ≡ ∑_i (a^i − b^i)  (mod p^{N+1}).

Thus 𝒩m is p-calm. We can also easily show that ℳm and 𝒲m are p-calm (see Sec-
tion 3.5).

Remark 3.10. There exist non-p-calm subtraction games. For example, let Γ be the
subtraction game Γ({ 0, p }, C_1^1). It is clear that Γ is a p-saturation of itself. Since
sgΓ(0) = sgΓ([0]_(p)) = 0 and sgΓ(p) = sgΓ([0, 1]_(p)) = 1, it follows that Γ is not p-calm.
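Condition (15.9) can also be probed numerically (a sketch under our conventions, reusing ord_p and p_nim_sum from the earlier sketches): in 𝒩2 the proper descendants of a position are exactly the componentwise-smaller positions, and the saturated values are the digitwise sums of (15.4).

    # Numeric check of p-calmness (15.9) for N_2 with p = 3 (sketch).
    def mord_p(D, p):
        return min(ord_p(d, p) for d in D)

    p = 3
    for a in range(10):
        for b in range(10):
            for x in range(a + 1):
                for y in range(b + 1):
                    if (x, y) == (a, b):
                        continue
                    N = mord_p((a - x, b - y), p)
                    lhs = p_nim_sum(a, b, p) - p_nim_sum(x, y, p)
                    assert (lhs - (a - x) - (b - y)) % p ** (N + 1) == 0

    # By contrast, the game of Remark 3.10 fails: sg(p) - sg(0) = 1, while
    # the heap difference is p, and 1 is not congruent to p modulo p^2.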

3.4 A base-p Sprague–Grundy-type theorem


The next theorem says that p-calm subtraction games satisfy (PN) and are closed un-
der disjunctive sum.

Theorem 3.11. For i ∈ { 1, …, k }, let Γ^i be a p-calm subtraction game. Then the disjunc-
tive sum Γ^1 + ⋯ + Γ^k is p-calm. Moreover, if Γ̃ is a p-saturation of Γ^1 + ⋯ + Γ^k and A is a
position in Γ̃, then

sgΓ̃(A) = sgΓ̃1(A^1) ⊕p ⋯ ⊕p sgΓ̃k(A^k),

where A = (A^1, …, A^k), and Γ̃^i is a p-saturation of Γ^i.

To prove Theorem 3.11, we use the following simple lemma.

Lemma 3.12. Let Γ be a p-calm subtraction game, and let ϕ be the Sprague–Grundy
function of its p-saturation. If A is a position in Γ and B is its proper descendant, then

ϕ(A) ⊖p ϕ(B) ≡ ϕ(A) − ϕ(B) ≡ [0, …, 0, ⨁_i (a^i_N ⊖ b^i_N)]_(p)  (mod p^{N+1}),  (15.13)

where N = mordp(A − B).

Proof. By Lemma 3.1

∑_i (a^i − b^i) ≡ ⨁p_i (a^i ⊖p b^i) ≡ [0, …, 0, ⨁_i (a^i_N ⊖ b^i_N)]_(p)  (mod p^{N+1}).

Since Γ is p-calm, it follows that

∑_i (a^i − b^i) ≡ ϕ(A) − ϕ(B)  (mod p^{N+1}).

We show that ϕ(A) − ϕ(B) ≡ ϕ(A) ⊖p ϕ(B) (mod p^{N+1}). Since

ϕ(A) − ϕ(B) ≡ ∑_i (a^i − b^i) ≡ 0  (mod p^N),

we see that ϕ(A) ≡ ϕ(B) (mod p^N). It follows from Lemma 3.1 that

ϕ(A) − ϕ(B) ≡ ϕ(A) ⊖p ϕ(B)  (mod p^{N+1}).

Therefore (15.13) holds.

Proof of Theorem 3.11. For i ∈ { 1, …, k }, we may assume that Γ̃^i = Γ(𝒫^i, C_{m_i}^{(p)}) and
Γ̃ = Γ(𝒫^1 × ⋯ × 𝒫^k, C_m^{(p)}), where m = m_1 + ⋯ + m_k. Let ϕ^i(A^i) = sgΓ̃i(A^i) and ϕ(A) =
ϕ^1(A^1) ⊕p ⋯ ⊕p ϕ^k(A^k). To prove that sgΓ̃(A) = ϕ(A), it suffices to show the following
two statements.
(SG1) If B is an option of A in Γ̃, then ϕ(B) ≠ ϕ(A).
(SG2) If 0 ≤ β < ϕ(A), then ϕ(B) = β for some option B of A in Γ̃.

Let B be a proper descendant (B^1, …, B^k) of A, and let

N = mordp(A − B) = min { ordp(a^{i,j} − b^{i,j}) : 1 ≤ i ≤ k, 1 ≤ j ≤ m_i },

where A^i = (a^{i,1}, …, a^{i,m_i}) and B^i = (b^{i,1}, …, b^{i,m_i}). We first show that

ϕ(A) − ϕ(B) ≡ ∑_{i,j} (a^{i,j} − b^{i,j}) ≡ [0, …, 0, ⨁_{i,j} (a^{i,j}_N ⊖ b^{i,j}_N)]_(p)  (mod p^{N+1}).  (15.14)

Since a^{i,j} ≡ b^{i,j} (mod p^N), it follows from Lemma 3.1 that

∑_{i,j} (a^{i,j} − b^{i,j}) ≡ [0, …, 0, ⨁_{i,j} (a^{i,j}_N ⊖ b^{i,j}_N)]_(p)  (mod p^{N+1}).

Now, since Γ^i is p-calm, it follows from Lemma 3.12 that

ϕ^i(A^i) − ϕ^i(B^i) ≡ ϕ^i(A^i) ⊖p ϕ^i(B^i) ≡ [0, …, 0, ⨁_j (a^{i,j}_N ⊖ b^{i,j}_N)]_(p)  (mod p^{N+1}).  (15.15)

Moreover, since ϕ^i(A^i) ≡ ϕ^i(B^i) (mod p^N), we see that ϕ(A) ≡ ϕ(B) (mod p^N). By
Lemma 3.1 and (15.15)

ϕ(A) − ϕ(B) ≡ ϕ(A) ⊖p ϕ(B) ≡ ⨁_i (ϕ^i(A^i) ⊖p ϕ^i(B^i)) ≡ [0, …, 0, ⨁_{i,j} (a^{i,j}_N ⊖ b^{i,j}_N)]_(p)  (mod p^{N+1}).

Therefore (15.14) holds.


We now show (SG1). Let B be an option of A, and let N = mordp(A − B). Then

⨁_{i,j} (a^{i,j}_N ⊖ b^{i,j}_N) ≠ 0.

By (15.14), (SG1) holds.


We next show (SG2). Let α^i = ϕ^i(A^i) and α = α^1 ⊕p ⋯ ⊕p α^k (= ϕ(A)). Let β be an
integer with 0 ≤ β < α. We first construct a descendant B of A with ϕ(B) = β. When we
consider (α^1, …, α^k) as a position in a p-saturation of Nim, its Sprague–Grundy value
is equal to α, as we have mentioned in Example 3.2. Therefore there exist β^1, …, β^k ∈ ℕ
satisfying the following three conditions:

1. β^i ≤ α^i.
2. ordp(∑_i (α^i − β^i)) = min { ordp(α^i − β^i) : 1 ≤ i ≤ k }.
3. β^1 ⊕p ⋯ ⊕p β^k = β.

If β^i = α^i, then let B^i = A^i. If β^i < α^i, then since α^i = ϕ^i(A^i) = sgΓ̃i(A^i), we see that A^i has
an option B^i such that ϕ^i(B^i) = β^i in Γ̃^i. Let B = (B^1, …, B^k). Then ϕ(B) = β^1 ⊕p ⋯ ⊕p
β^k = β.
We prove that B is an option of A. By (15.14) it suffices to show that β_N ≠ α_N, where
N = mordp(A − B). Let N^i = mordp(A^i − B^i). We first show that

N^i = ordp(α^i − β^i).  (15.16)

Indeed, if α^i = β^i, then A^i = B^i, so (15.16) holds. Suppose that α^i > β^i. Since B^i is an
option of A^i and Γ^i is p-calm, it follows from Lemma 3.12 that

N^i = ordp(∑_j (a^{i,j} − b^{i,j})) = ordp(ϕ^i(A^i) − ϕ^i(B^i)) = ordp(α^i − β^i).

Hence (15.16) holds. We now prove that β_N ≠ α_N. By (15.16) and condition 2,

N = min { N^i : 1 ≤ i ≤ k } = min { ordp(α^i − β^i) : 1 ≤ i ≤ k } = ordp(∑_i (α^i − β^i)).  (15.17)

Now since ordp(α^i − β^i) = N^i ≥ N, we see that α^i ≡ β^i (mod p^N). It follows from
Lemma 3.1 that

(∑_i (α^i − β^i))_N = ⨁_i (α^i_N ⊖ β^i_N) = α_N ⊖ β_N.

By (15.17) we see that α_N ≠ β_N, and so B is an option of A in Γ̃. Therefore ϕ(A) = sgΓ̃(A).
In particular, if A is a position and B is its proper descendant, then by (15.14)

sgΓ̃(A) − sgΓ̃(B) = ϕ(A) − ϕ(B) ≡ ∑_{i,j} (a^{i,j} − b^{i,j})  (mod p^{N+1}),

where N = mordp(A − B). Hence Γ^1 + ⋯ + Γ^k is p-calm.

Corollary 3.13. If Γ is a subtraction game Γ(𝒫 , C ) with C ⊆ Cm(p) , then Γ and 𝒩1 satisfy
(PN) if and only if Γ is p-calm.

Remark 3.14. We can generalize Corollary 3.13 as follows. Let Γ1 be a subtraction
game Γ(𝒫, C) with C ⊆ C_m^{(p)}, let Γ2 be a p-calm subtraction game, and let Γ̃2 be its
p-saturation. Suppose that for every α ∈ ℕ there exists a position A^2 ∈ 𝒫Γ̃2 such that
sgΓ̃2(A^2) = α. Then Γ1 and Γ2 satisfy (PN) if and only if Γ1 is p-calm. We can prove this
by the same argument as in the proof of Lemma 3.8, so we only sketch it. Suppose that
Γ1 is not p-calm. Then there exist a position A^1 and its proper descendant B^1 such that

α − β ≢ ∑_j (a^{1,j} − b^{1,j})  (mod p^{N+1}),  (15.18)

where α = sgΓ̃1(A^1), β = sgΓ̃1(B^1), Γ̃1 is a p-saturation of Γ1, and N = mordp(A^1 − B^1).
By assumption, Γ̃2 has a position A^2 such that sgΓ̃2(A^2) = β ⊖p α (> 0). This position A^2
has an option B^2 with sgΓ̃2(B^2) = 0. Let A = (A^1, A^2) and B = (B^1, B^2). It is sufficient to
show that B is an option of A in Γ̃, where Γ̃ = Γ(𝒫Γ1 × 𝒫Γ2, C_n^{(p)}) and n is the total number
of heap components. Let M = mordp(A^2 − B^2) (= ordp(∑_j (a^{2,j} − b^{2,j}))). Since Γ2 is
p-calm, it follows from Lemma 3.12 that

∑_j (a^{2,j} − b^{2,j}) ≡ β ⊖p α ≡ [0, …, 0, β_M ⊖ α_M]_(p)  (mod p^{M+1}).  (15.19)

Suppose that M < N. Then mordp(A − B) = M. Since (∑_{i,j} (a^{i,j} − b^{i,j}))_M = (∑_j (a^{2,j} − b^{2,j}))_M ≠ 0,
it follows that B is an option of A. Suppose that M ≥ N. Then mordp(A − B) = N. It
follows from (15.18) and (15.19) that α_N ⊖ β_N ≠ ⨁_j (a^{1,j}_N ⊖ b^{1,j}_N). Hence

(∑_{i,j} (a^{i,j} − b^{i,j}))_N = (⨁_j (a^{1,j}_N ⊖ b^{1,j}_N)) ⊕ (β_N ⊖ α_N) ≠ 0.

This implies that B is an option of A. Therefore Γ1 and Γ2 do not satisfy (PN).

3.5 The p-calmness of Welter’s game


For a position A in Welter’s game, let ψ(p) (A) denote the right-hand side of equa-
tion (15.7) in Theorem 3.6, that is,

i j
ψ(p) (A) = a1 ⊕p ⋅ ⋅ ⋅ ⊕p am ⊕p (⨁p pordp (a −a )+1 − 1). (15.20)
i<j

Theorem 3.15. Welter’s game 𝒲m is p-calm. In particular, if Γ̃ is a p-saturation of 𝒲m1 +


⋅ ⋅ ⋅ + 𝒲mk and A is a position in Γ,̃ then

sgΓ̃ (A) = ψ(p) (A1 ) ⊕p ⋅ ⋅ ⋅ ⊕p ψ(p) (Ak ),

where A = (A1 , . . . , Ak ).

Proof. Let A be a position in 𝒲m, and let B be its proper descendant. Set N = mordp(A −
B). Then a^i ≡ b^i (mod p^N). In particular, a^i − a^j ≡ b^i − b^j (mod p^N). Since

(p^{ordp(h)+1} − 1)_L = p − 1 if h ≡ 0 (mod p^L), and = 0 if h ≢ 0 (mod p^L),

we see that (p^{ordp(a^i − a^j)+1} − 1)_L = (p^{ordp(b^i − b^j)+1} − 1)_L for 0 ≤ L ≤ N. Thus

p^{ordp(a^i − a^j)+1} − 1 ≡ p^{ordp(b^i − b^j)+1} − 1  (mod p^{N+1}).

It follows from (15.20) that

ψ(p)(A) − ψ(p)(B) ≡ [0, …, 0, ⨁_i (a^i_N ⊖ b^i_N)]_(p) ≡ ∑_i (a^i − b^i)  (mod p^{N+1}).

Therefore Welter's game is p-calm.

Example 3.16. Let Γ be a 5-saturation of 𝒲3 + 𝒲1 , and let A be the position ((7, 5, 3), (3))
in Γ. By Theorem 3.15

sgΓ (A) = ψ(5) (7, 5, 3) ⊕5 ψ(5) (3) = 12 ⊕5 3 = 10.
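With the sketches above, the example is the one-line check

    # Example 3.16 via the earlier psi_p and p_nim_sum sketches.
    assert p_nim_sum(psi_p((7, 5, 3), 5), psi_p((3,), 5), 5) == 10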

Remark 3.17. We can show the p-calmness of ℳm similarly. Indeed, let A be a position
in ℳm, and let B be a proper descendant of A. Since

(p^{mordp(A)+1} − 1)_L = p − 1 if a^i ≡ 0 (mod p^L) for every i, and = 0 if a^i ≢ 0 (mod p^L) for some i,

we see that p^{mordp(A)+1} − 1 ≡ p^{mordp(B)+1} − 1 (mod p^{N+1}), where N = mordp(A − B). It
follows from Theorem 3.5 that ℳm is p-calm.

3.6 Full positions in Welter’s game


Recall that Theorem 3.6 provides a formula for the Sprague–Grundy function of
a p-saturation of Welter’s game. This theorem was proved using the next proposi-
tion. In this section, we generalize this proposition to disjunctive sums of Welter’s
games using Theorem 3.15.

Proposition 3.18 ([6]). Every position A in Welter’s game 𝒲m has a descendant B such
that lg𝒲m (B) = ψ(p) (B) = ψ(p) (A).

Example 3.19. Let A be the position (6, 4, 2) in 𝒲3. Then

ψ(2)(A) = 6 ⊕2 4 ⊕2 2 ⊕2 (2^{ord2(6−4)+1} − 1) ⊕2 (2^{ord2(6−2)+1} − 1) ⊕2 (2^{ord2(4−2)+1} − 1)
        = 6 ⊕2 4 ⊕2 2 ⊕2 3 ⊕2 7 ⊕2 3 = 7.

Proposition 3.18 says that A has a descendant B such that lg𝒲3(B) = ψ(2)(B) = 7. In-
deed, if B = (5, 3, 2), then

lg𝒲3(B) = 5 + 3 + 2 − \binom{3}{2} = 7,

and

ψ(2)(B) = 5 ⊕2 3 ⊕2 2 ⊕2 (2^{ord2(5−3)+1} − 1) ⊕2 (2^{ord2(5−2)+1} − 1) ⊕2 (2^{ord2(3−2)+1} − 1)
        = 5 ⊕2 3 ⊕2 2 ⊕2 3 ⊕2 1 ⊕2 1 = 7.

Remark 3.20. In general, if A is a position in an impartial game Γ, then

sgΓ (A) ≤ lgΓ (A). (15.21)

If sgΓ (A) = lgΓ (A), then A is said to be full. Consider the following condition on an
impartial game Γ:
(FD) Every position A in Γ has a full descendant B with the same Sprague–Grundy
value as A.

It is easy to show that a p-saturation of Nim satisfies condition (FD). It follows from
Proposition 3.18 and Theorem 3.6 that a p-saturation of Welter’s game also satis-
fies (FD). Our aim is to prove that a p-saturation of disjunctive sums of Welter’s games
satisfies (FD). To this end, we show two lemmas.

Lemma 3.21. Let A be a full position in an impartial game Γ. If lgΓ (A) > 0, then A has a
full option B with lgΓ (B) = lgΓ (A) − 1. In particular, if 0 ≤ β ≤ lgΓ (A) − 1, then A has a full
descendant B with lgΓ (B) = β.

Proof. Since A is full, it follows that sgΓ (A) = lgΓ (A) > 0. Hence A has an option B
with sgΓ (B) = sgΓ (A) − 1 = lgΓ (A) − 1. Since sgΓ (B) ≤ lgΓ (B) ≤ lgΓ (A) − 1, we see that
sgΓ (B) = lgΓ (B) = lgΓ (A) − 1. Thus B is the desired option of A.

Lemma 3.22. For i ∈ { 1, . . . , k }, let Γi be an impartial game satisfying (FD), and let Γ be
an impartial game with position set 𝒫Γ1 × ⋅ ⋅ ⋅ × 𝒫Γk . If for A = (A1 , . . . , Ak ) ∈ 𝒫Γ ,

sgΓ (A) = sgΓ1 (A1 ) ⊕p ⋅ ⋅ ⋅ ⊕p sgΓk (Ak ) (15.22)

and

lgΓ (A) = lgΓ1 (A1 ) + ⋅ ⋅ ⋅ + lgΓk (Ak ), (15.23)

then Γ satisfies (FD).



Proof. Let A be a position (A^1, …, A^k) in Γ, and let α^i = sgΓi(A^i) and α = sgΓ(A) =
α^1 ⊕p ⋯ ⊕p α^k. By Lemma 3.21 it suffices to show that A has a full descendant B such
that sgΓ(B) ≥ α. Let

M = max { L ∈ ℕ : α^1_L + ⋯ + α^k_L ≥ p },

where max ∅ = −1. Define

β = (p^{M+1} − 1) + ∑_{L≥M+1} α_L p^L = [p − 1, …, p − 1, α_{M+1}, α_{M+2}, …]_(p).

Then β ≥ α. We will show that A has a full descendant B such that sgΓ(B) = β.
We first show that there exist β^1, …, β^k ∈ ℕ satisfying the following two condi-
tions:
1. β^i ≤ α^i.
2. β^1 + ⋯ + β^k = β^1 ⊕p ⋯ ⊕p β^k = β.

If M = −1, then α^1, …, α^k satisfy the two conditions. Suppose that M ≥ 0. For L ≥ M + 1,
let β^i_L = α^i_L. Since α^1_M + ⋯ + α^k_M ≥ p, there exist β^1_M, …, β^k_M such that

β^i_M ≤ α^i_M and β^1_M + ⋯ + β^k_M = p − 1 = β_M.

By rearranging the α^i if necessary, we may assume that β^1_M < α^1_M. For L ≤ M − 1, let β^1_L = p − 1
and β^i_L = 0 for i ≥ 2. Let β^i = [β^i_0, β^i_1, …]_(p). Then β^1, …, β^k satisfy conditions 1 and 2.
Since Γ^i satisfies (FD), it follows from Lemma 3.21 that A^i has a full descendant B^i
such that sgΓi(B^i) = β^i. Let B = (B^1, …, B^k). Then

sgΓ(B) = β^1 ⊕p ⋯ ⊕p β^k = β.

Since

β = β^1 + ⋯ + β^k = lgΓ1(B^1) + ⋯ + lgΓk(B^k) = lgΓ(B),

it follows that B is the desired descendant of A.

Proposition 3.23. A p-saturation of a disjunctive sum of Welter’s games satisfies (FD).

Proof. For i ∈ { 1, …, k }, let Γ^i = 𝒲mi and Γ = Γ^1 + ⋯ + Γ^k. Let Γ̃ be a p-saturation
of Γ. It is obvious that lgΓ̃(A) = lgΓ̃1(A^1) + ⋯ + lgΓ̃k(A^k), where Γ̃^i is a p-saturation of Γ^i.
Moreover, it follows from Theorem 3.15 that sgΓ̃(A) = sgΓ̃1(A^1) ⊕p ⋯ ⊕p sgΓ̃k(A^k). Since
Γ̃^1, …, Γ̃^k satisfy (FD), it follows from Lemma 3.22 that so does Γ̃.

Example 3.24. Let Γ = 𝒲3 + 𝒲2 , and let A be the position ((6, 5, 2), (3, 1)) in Γ. Note that
Γ is a 2-saturation of itself and

sgΓ (A) = ψ(2) (6, 5, 2) ⊕2 ψ(2) (3, 1) = 6 ⊕2 1 = 7.



By Proposition 3.23 the position A has a full descendant B such that lgΓ (B) =
ψ(2) (B) = 7. Indeed, if B = ((4, 3, 1), (3, 0)), then

sgΓ (B) = ψ(2) (4, 3, 1) ⊕2 ψ(2) (3, 0) = 5 ⊕2 2 = 7,

and

lgΓ (B) = lg𝒲3 (4, 3, 1) + lg𝒲2 (3, 0) = 5 + 2 = 7.

4 The p′ -component theorem


By rewriting Proposition 3.23 in terms of Young diagrams we generalize the p′ -
component theorem. First, we describe Welter’s game as a game with Young diagrams.
We then state the p′ -component theorem. Finally, we generalize this theorem by using
Proposition 3.23.
In this section, we assume that p is a prime.

4.1 Welter’s game and Young diagrams


For n ∈ ℕ, a partition λ of n is a tuple (λ1 , . . . , λm ) of positive integers such that ∑ λi = n
and λ1 ≥ ⋅ ⋅ ⋅ ≥ λm . For example, (4, 4, 1) is a partition of 9, and () is a partition of 0. If λ
is a partition (λ1 , . . . , λm ), then

{ (i, j) ∈ ℕ2 : 1 ≤ i ≤ m, 1 ≤ j ≤ λi }

is called the Young diagram or the Ferrers diagram corresponding to λ. We will identify
a partition with its Young diagram. A Young diagram Y can be visualized by using |Y|
cells. For example, Figure 15.2 shows the Young diagram (5, 4, 3).

Figure 15.2: The Young diagram (5, 4, 3) in English notation.

For a position A in 𝒲m, let

Y(A) = (a^{σ(1)} − m + 1, a^{σ(2)} − m + 2, …, a^{σ(m)}),

where σ is a permutation with a^{σ(1)} > a^{σ(2)} > ⋯ > a^{σ(m)}. We consider Y(A) as a Young
diagram by ignoring zeros in Y(A). For example, if A = (3, 7, 5) and A′ = (5, 9, 7, 1, 0),
then

Y(A) = (7 − 2, 5 − 1, 3) = (5, 4, 3) and Y(A′) = (9 − 4, 7 − 3, 5 − 2, 1 − 1, 0) = (5, 4, 3, 0, 0),

so Y(A′) = Y(A) = (5, 4, 3). Note that the number of cells in Y(A) is equal to lg𝒲m(A)
since

lg𝒲m(A) = ∑_i a^i − (1 + ⋯ + (m − 1)) = |Y(A)|.

In the above example,

lg𝒲3(A) = 3 + 7 + 5 − (1 + 2) = (7 − 2) + (5 − 1) + 3 = 12 = |Y(A)|.
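The map A ↦ Y(A) in code (a sketch with our names), checking the two examples above:

    # From a Welter position to its Young diagram (sketch).
    def young_of(A):
        s = sorted(A, reverse=True)
        m = len(s)
        parts = [s[i] - (m - 1 - i) for i in range(m)]
        return tuple(x for x in parts if x > 0)   # ignore zeros

    assert young_of((3, 7, 5)) == (5, 4, 3)
    assert young_of((5, 9, 7, 1, 0)) == (5, 4, 3)
    assert sum(young_of((3, 7, 5))) == 12         # = lg of the position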

It is known that moving in Welter’s game corresponds to removing a hook. Here,


for a cell (i, j) in a Young diagram Y, the (i, j)-hook Hij (Y) is defined by

Hi,j (Y) = { (i′ , j′ ) ∈ Y : (i′ ≥ i and j′ = j) or (i′ = i and j′ ≥ j) } .

In other words, Hi,j (Y) consists of the cells to the right of (i, j), the cells below (i, j), and
(i, j) itself. For example, Figure 15.3 shows the (1, 2)-hook of (5, 4, 3).

Figure 15.3: The (1, 2)-hook of (5, 4, 3).

We now describe removing a hook. We first remove the cells of H_{i,j}(Y) from Y. If we get two
diagrams, then we next push them together. The obtained Young diagram is denoted
by Y \ H_{i,j}(Y) and is said to be obtained from Y by removing the (i, j)-hook. For example,
if Y = (5, 4, 3), then Y \ H_{1,2}(Y) = (3, 2, 1), and Y \ H_{3,2}(Y) = (5, 4, 1) (see Figure 15.4).

Figure 15.4: Removing the (1, 2)-hook and the (3, 2)-hook.

Moving in Welter’s game corresponds to removing a hook as follows. Let A be a posi-


tion in 𝒲m , and let B be its option. Then we can write

B = (a1 , . . . , as−1 , as − h, as+1 , . . . , am ).


A base-p Sprague–Grundy-type theorem for p-calm subtraction games | 301

Let as be the ith largest element in the components of A. Note that as − h ∈ ℕ \


{ a1 , . . . , am }. Suppose that as − h is the jth smallest element in ℕ \ { a1 , . . . , am }. Then
the following holds (see, for example, [13]):

Y(B) = Y(A) \ Hi,j (Y(A)). (15.24)

For example, consider moving from (7, 5, 3) to (1, 5, 3) in 𝒲3 . Since 7 is the largest
element in { 7, 5, 3 } and 1 is the second smallest element in ℕ \ { 7, 5, 3 }, it follows
from (15.24) that

Y(1, 5, 3) = Y(7, 5, 3) \ H1,2 (Y(7, 5, 3)).

Indeed, Y(1, 5, 3) is the Young diagram (3, 2, 1), and

Y(7, 5, 3) \ H1,2 (Y(7, 5, 3)) = (5, 4, 3) \ H1,2 (5, 4, 3) = (3, 2, 1)

as we have seen in Figure 15.4. In this way, moving in Welter’s game corresponds to
removing a hook. Note that it is obvious that |Y(A)| = lg𝒲m (A).
The number of cells in the (i, j)-hook is called the hook-length of (i, j). Figure 15.5
shows the hook-lengths of the Young diagram (5, 4, 3). Let H (Y) be the multiset of
hook lengths of Y. For example, H (5, 4, 3) = { 1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 6, 7 }.

Figure 15.5: The hook lengths of (5, 4, 3).

Theorem 4.1 ([16, 17, 18]). If A is a position in Welter's game 𝒲m, then

sg𝒲m(A) = ⨁2_{h∈H(Y)} N(2)(h),

where Y = Y(A) and N(2)(h) = ∑_{L=0}^{ord2(h)} 2^L = [1, …, 1]_(2), with ord2(h) + 1 ones.

Example 4.2. If A = (7, 5, 3), then

sg𝒲3 (A) = ⨁2 N(2) (h)


h∈H (Y(A))

= N(2) (1) ⊕2 N(2) (1) ⊕2 N(2) (1) ⊕2 N(2) (3) ⊕2 N(2) (3) ⊕2 N(2) (3)
⊕2 N(2) (5) ⊕2 N(2) (5) ⊕2 N(2) (7)
⊕2 N(2) (2) ⊕2 N(2) (6) ⊕2 N(2) (4)

= 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 1 ⊕2 3 ⊕2 3 ⊕2 7
= 6.
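Theorem 4.1 is straightforward to evaluate mechanically; the sketch below (our names) computes the hook lengths of a Young diagram and the resulting value, reproducing Example 4.2.

    # Hook lengths and Sato's formula (Theorem 4.1); a sketch.
    def hook_lengths(Y):
        hooks = []
        for i, row in enumerate(Y):
            for j in range(row):
                arm = row - j - 1                         # cells to the right
                leg = sum(1 for r in Y[i + 1:] if r > j)  # cells below
                hooks.append(arm + leg + 1)
        return hooks

    def N2(h):
        v = 1                      # N_(2)(h) = 2^(ord_2(h)+1) - 1
        while h % 2 == 0:
            h //= 2
            v = 2 * v + 1
        return v

    def sg_via_hooks(Y):
        v = 0
        for h in hook_lengths(Y):
            v ^= N2(h)
        return v

    assert sorted(hook_lengths((5, 4, 3))) == [1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 6, 7]
    assert sg_via_hooks((5, 4, 3)) == 6    # the position (7, 5, 3)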

From Theorem 3.6 we obtain the following analogue of Theorem 4.1.5

Theorem 4.3 ([6]). Let Γ be a p-saturation of 𝒲m. If A is a position in Γ, then

sgΓ(A) = ⨁p_{h∈H(Y)} N(p)(h),  (15.25)

where Y = Y(A) and N(p)(h) = ∑_{L=0}^{ordp(h)} p^L = [1, …, 1]_(p), with ordp(h) + 1 ones.

Example 4.4. Let Γ be a 5-saturation of 𝒲3 , and let A be the position (7, 5, 3) in Γ. By


Theorem 3.6 we see that

sgΓ (A) = ⨁5 N(5) (h)


h∈H (Y(A))

= N(5) (1) ⊕5 N(5) (1) ⊕5 N(5) (1) ⊕5 N(5) (2) ⊕5 N(5) (3)
⊕5 N(5) (3) ⊕5 N(5) (3) ⊕5 N(5) (4) ⊕5 N(5) (6) ⊕5 N(5) (7)
⊕5 N(5) (5) ⊕5 N(5) (5)
= N(5) (5) ⊕5 N(5) (5)
= 6 ⊕5 6
= 12.

4.2 The p′ -component theorem


We describe the p′ -component theorem in terms of Young tableaux.
Let Y be a Young diagram. A Young tableau T of shape Y is a bijection from Y to
{ 1, 2, . . . , |Y| }. A Young tableau T of shape Y is called a standard tableau if T(i, j) ≤
T(i′ , j′ ) for (i, j), (i′ , j′ ) ∈ Y with i ≤ i′ and j ≤ j′ . We can visualize a Young tableau by
writing numbers in cells. Figure 15.6 shows an example of a standard tableau.

Figure 15.6: A standard tableau of shape (5, 4, 3).

5 Theorem 4.3 holds for an integer p greater than 1.



Let f^Y be the number of standard tableaux of shape Y. We can calculate f^Y by using
the hook lengths of Y.

Theorem 4.5 (Hook formula [4]). If Y is a Young diagram with n cells, then

f^Y = n! / ∏_{h∈H(Y)} h.

Example 4.6. Let Y = (2, 1). By Theorem 4.5

f^Y = 3!/(1^2 ⋅ 3) = 2.

Indeed, there are exactly two standard tableaux of shape Y (see Figure 15.7).

Figure 15.7: Two standard tableaux of shape (2, 1).
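The hook formula is a one-line computation once the hook lengths are known; a sketch (reusing hook_lengths from the previous sketch) that checks this example and the two diagrams appearing in Example 4.9 below:

    # The hook formula (Theorem 4.5); a sketch.
    from math import factorial, prod

    def f_Y(Y):
        hooks = hook_lengths(Y)
        return factorial(len(hooks)) // prod(hooks)

    assert f_Y((2, 1)) == 2
    assert f_Y((4, 3, 2)) == 168     # even
    assert f_Y((3, 2, 2)) == 21      # odd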

Remark 4.7. The above formula is important in combinatorial representation theory.


Let Sym(n) denote the symmetric group on { 1, 2, . . . , n }. A representation ρ of Sym(n) is
a group homomorphism from Sym(n) to the general linear group GL(d, ℂ), the group
of invertible d × d matrices with entries in ℂ. The number d is called the degree of ρ. It
is known that the Young diagrams with n cells are in one-to-one correspondence with
the irreducible6 representations of Sym(n). For a Young diagram Y with n cells, let ρY
denote the irreducible representation of Sym(n) corresponding to Y. Then the degree
of ρY is equal to f Y . In addition to degrees, a lot of results about representations of
symmetric groups can be approached in a purely combinatorial way. See, for example,
[15] for details.

Using f^Y, we restate Proposition 3.18. MacDonald [9] characterized the diagrams Y such that f^Y
is prime to p.7 Irie [6] observed that it follows from MacDonald's characterization that
f^Y is prime to p if and only if ψ(p)(Y) = |Y|, where ψ(p)(Y) is the right-hand side of
equation (15.25). Recall that

ψ(p)(Y(A)) = sgΓ̃(A) and |Y(A)| = lgΓ̃(A)

for a position A in a p-saturation Γ̃ of 𝒲m. Hence A is full if and only if f^{Y(A)} is prime
to p. Therefore the following result follows from Proposition 3.18.

6 Roughly speaking, an irreducible representation is a kind of atom among representations because
every representation can be decomposed into a direct sum of irreducible representations.
7 MacDonald's characterization holds only when p is prime, so we have to assume that p is prime.

Theorem 4.8 ([6]). Every Young diagram Y includes a Young diagram Z with ψ(p)(Y)
cells such that f^Z is prime to p.

Proof. Let Y = (λ_1, λ_2, …, λ_m), and let A be the position (λ_1 + m − 1, λ_2 + m − 2, …, λ_m)
in a p-saturation Γ̃ of 𝒲m. Then Y(A) = Y. By Proposition 3.18 the position A has a full
descendant B with the same Sprague–Grundy value as A, that is, lgΓ̃(B) = ψ(p)(B) =
ψ(p)(A). Since moving in Welter's game corresponds to removing a hook, we see that
Y(B) ⊆ Y(A). Moreover,

|Y(B)| = lgΓ̃(B) = ψ(p)(B) = ψ(p)(A) = ψ(p)(Y(A)).

Therefore Y(B) is the desired Young diagram.

Example 4.9. Let p = 2, and let Y be the Young diagram (4, 3, 2). By Theorem 4.5

f^Y = 9!/(1^3 ⋅ 2 ⋅ 3^2 ⋅ 4 ⋅ 5 ⋅ 6) = 168.

Note that f^Y is even. Now Y corresponds to the position (6, 4, 2) in 𝒲3. Since ψ(2)(Y) =
sg𝒲3(6, 4, 2) = 7, Theorem 4.8 says that Y includes a Young diagram Z with 7 cells such
that f^Z is odd. Indeed, as we have seen in Example 3.19, the position (5, 3, 2) is a full
descendant of (6, 4, 2), and these two positions have the same Sprague–Grundy value.
Let Z = Y(5, 3, 2) = (3, 2, 2). Then Z ⊂ Y, and

|Z| = 3 + 2 + 2 = 7 = ψ(2)(Y).

Moreover,

f^Z = 7!/(1^2 ⋅ 2^2 ⋅ 3 ⋅ 4 ⋅ 5) = 21.

In particular, f^Z is odd.

Remark 4.10. Theorem 4.8 has the following algebraic interpretation. Let Y be a
Young diagram with n cells. Recall that the corresponding irreducible representation
ρ^Y is a map from Sym(n) to GL(d, ℂ). Since Sym(n − 1) ⊆ Sym(n), we can obtain a rep-
resentation of Sym(n − 1) by restricting ρ^Y to Sym(n − 1). The obtained representation
ρ^Y|_{Sym(n−1)} may not be irreducible and can be decomposed as follows:

ρ^Y|_{Sym(n−1)} = ⨁_{Y^−} ρ^{Y^−},  (15.26)

where the direct sum runs over all Young diagrams Y^− obtained from Y by removing
a hook of length 1. For example,

ρ^{(2,1)}|_{Sym(2)} = ρ^{(2)} ⊕ ρ^{(1,1)} and ρ^{(4,3,2)}|_{Sym(8)} = ρ^{(3,3,2)} ⊕ ρ^{(4,2,2)} ⊕ ρ^{(4,3,1)}.

From equation (15.26) and Theorem 4.8 we see that the restriction of ρ^Y to Sym(ψ(p)(Y))
has a component with degree prime to p. Thus we will call Theorem 4.8 the p′-
component theorem. For example, if Y = (4, 3, 2), then

ρ^Y|_{Sym(ψ(2)(Y))} = ρ^{(4,3,2)}|_{Sym(7)} = (ρ^{(4,3,2)}|_{Sym(8)})|_{Sym(7)}
  = (ρ^{(3,3,2)} ⊕ ρ^{(4,2,2)} ⊕ ρ^{(4,3,1)})|_{Sym(7)}
  = (ρ^{(3,2,2)} ⊕ ρ^{(3,3,1)}) ⊕ (ρ^{(3,2,2)} ⊕ ρ^{(4,2,1)}) ⊕ (ρ^{(3,3,1)} ⊕ ρ^{(4,2,1)} ⊕ ρ^{(4,3)}).

Since

deg ρ^{(3,2,2)} = f^{(3,2,2)} = 21,

the representation ρ^{(3,2,2)} is a component of ρ^Y|_{Sym(ψ(2)(Y))} with odd degree.

As we have mentioned, by using the p′ -component theorem we can show that


the Sprague–Grundy function of a p-saturation of Welter’s game is equal to ψ(p) . We
generalize the p′ -component theorem in the next section.

4.3 A generalization of the p′ -component theorem


Let Y be a k-tuple (Y^1, …, Y^k) of Young diagrams. A Young tableau T of shape Y is a
bijection from the disjoint union ⨆_l Y^l to { 1, 2, …, |Y| }, where |Y| = ∑_l |Y^l|. A Young
tableau T of shape Y is called a standard tableau if T(i, j) ≤ T(i′, j′) for (i, j), (i′, j′) ∈ Y^l
and l ∈ { 1, …, k } with i ≤ i′ and j ≤ j′. Figure 15.8 shows an example of a standard
tableau.

Figure 15.8: A standard tableau of shape ((4, 4, 2), (2, 1)).

Let f^Y denote the number of standard tableaux of shape Y. It follows from Theorem 4.5
that

f^Y = (n!/(n^1! ⋯ n^k!)) ⋅ (n^1!/∏_{h∈H(Y^1)} h) ⋯ (n^k!/∏_{h∈H(Y^k)} h) = n!/∏_{h∈H(Y)} h,

where n = |Y|, n^l = |Y^l|, and H(Y) is the multiset addition H(Y^1) + ⋯ + H(Y^k).


For example, { 1, 2 } + { 1, 3 } = { 1, 1, 2, 3 }. Moreover, by using Olsson's generalization
[12] of MacDonald's characterization, we can show that f^Y is prime to p if and only if
ψ(p)(Y) = |Y|, where

ψ(p)(Y) = ψ(p)(Y^1) ⊕p ⋯ ⊕p ψ(p)(Y^k) = ⨁p_{h∈H(Y)} N(p)(h).

Let Y and Z be k-tuples (Y^1, …, Y^k) and (Z^1, …, Z^k) of Young diagrams, respec-
tively. We write Z ⊆ Y if Z^l ⊆ Y^l for l ∈ { 1, …, k }. The following theorem follows from
Proposition 3.23.

Theorem 4.11. Every k-tuple Y of Young diagrams includes a k-tuple Z of Young dia-
grams with ψ(p)(Y) cells in total such that f^Z is prime to p.

Proof. Let Y = (Y^1, …, Y^k), Y^l = (λ^{l,1}, …, λ^{l,m_l}), A^l = (λ^{l,1} + m_l − 1, λ^{l,2} + m_l − 2, …, λ^{l,m_l}),
and A = (A^1, …, A^k). Then Y = (Y(A^1), …, Y(A^k)). We consider A as a position in Γ̃,
where Γ̃ is a p-saturation of 𝒲m1 + ⋯ + 𝒲mk. By Proposition 3.23 the position A has a full
descendant B with the same Sprague–Grundy value as A, that is, lgΓ̃(B) = ψ(p)(B) =
ψ(p)(A). Let Z = (Y(B^1), …, Y(B^k)). We see that Z ⊆ Y. Moreover,

|Z| = lgΓ̃(B) = ψ(p)(B) = ψ(p)(A) = ψ(p)(Y).

Therefore Z is the desired k-tuple of Young diagrams.

Example 4.12. Let p = 2 and Y = ((4, 4, 2), (2, 1)). Then

H(Y) = { 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 5, 6 }

(see Figure 15.9). Hence

f^Y = 13!/(1^4 ⋅ 2^3 ⋅ 3^2 ⋅ 4 ⋅ 5^2 ⋅ 6) = 144144.

Moreover,

ψ(2)(Y) = ψ(2)((4, 4, 2)) ⊕2 ψ(2)((2, 1)) = 6 ⊕2 1 = 7.

By Theorem 4.11, Y includes a pair Z of Young diagrams with 7 cells in total such that
f^Z is odd. Indeed, let Z = ((2, 2, 1), (2)). We see that

f^Z = 7!/(1^3 ⋅ 2^2 ⋅ 3 ⋅ 4) = 105.

Thus Z is the desired pair of Young diagrams.

Figure 15.9: The hook lengths of ((4, 4, 2), (2, 1)) and ((2, 2, 1), (2)).

Remark 4.13. We give an algebraic interpretation of Theorem 4.11. It is known that
k-tuples of Young diagrams with n cells in total are in one-to-one correspondence with
the irreducible representations of the generalized symmetric group (ℤ/kℤ) ≀ Sym(n)
(see, for example, [14, 20] for details). For a k-tuple Y of Young diagrams with n cells in
total, let ρ^Y denote the corresponding irreducible representation of (ℤ/kℤ) ≀ Sym(n).
Then the degree of ρ^Y is equal to f^Y. Moreover, the following analogue of equa-
tion (15.26) holds:

ρ^Y|_{(ℤ/kℤ)≀Sym(n−1)} = ⨁_{Y^−} ρ^{Y^−},  (15.27)

where the direct sum runs over all k-tuples Y^− of Young diagrams obtained from Y by
removing a hook of length 1. For example,

ρ^{((4,4,2),(2,1))}|_{(ℤ/2ℤ)≀Sym(12)} = ρ^{((4,3,2),(2,1))} ⊕ ρ^{((4,4,1),(2,1))} ⊕ ρ^{((4,4,2),(1,1))} ⊕ ρ^{((4,4,2),(2))}.

Let Y and Z be two k-tuples of Young diagrams such that Z ⊆ Y. By (15.27) the rep-
resentation ρ^Z is a component of the restriction of ρ^Y to (ℤ/kℤ) ≀ Sym(|Z|). Therefore
Theorem 4.11 says that the restriction of ρ^Y to (ℤ/kℤ) ≀ Sym(ψ(p)(Y)) has a component
with degree prime to p.

Bibliography
[1] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays, vol. 1,
2nd ed., A. K. Peters, Natick, MA, 2001.
[2] U. Blass, A. S. Fraenkel, and R. Guelman, How far can Nim in disguise be stretched?, J. Combin.
Theory Ser. A 84(2) (1998), 145–156.
[3] J. H. Conway, On Numbers and Games, 2nd ed., A. K. Peters, Natick, MA, 2001.
[4] J. S. Frame, G. de B. Robinson, and R. M. Thrall, The hook graphs of the symmetric group,
Canad. J. Math. 6 (1954), 316–324.
[5] P. M. Grundy, Mathematics and games, Eureka 2 (1939), 6–8.
[6] Y. Irie, p-Saturations of Welter’s game and the irreducible representations of symmetric groups,
J. Algebraic Combin. 48 (2018), 247–287.
[7] Y. Irie. The Sprague-Grundy functions of saturations of misère Nim, Electron. J. Combin. 28(1)
(2021) #P1.58.
[8] S.-Y. R. Li, N-person Nim and N-person Moore’s games, Internat. J. Game Theory 7(1) (1978),
31–36.
[9] I. G. MacDonald, On the degrees of the irreducible representations of symmetric groups, Bull.
Lond. Math. Soc. 3(2) (1971), 189–192.
[10] E. H. Moore, A generalization of the game called Nim, Ann. of Math. 11(3) (1910), 93–94.

[11] G. Navarro, Character Theory and the McKay Conjecture, Cambridge Studies in Advanced
Mathematics, Cambridge University Press, Cambridge, 2018.
[12] J. B. Olsson, McKay numbers and heights of characters, Math. Scand. 38 (1976), 25–42.
[13] J. B. Olsson, Combinatorics and Representations of Finite Groups, Vorlesungen aus dem
Fachbereich Mathematik der Universität GH Essen 20, 1993.
[14] M. Osima, On the representations of the generalized symmetric group, Math. J. Okayama Univ.
4(1) (1954), 39–56.
[15] B. E. Sagan, The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric
Functions, Graduate Texts in Mathematics, 203, 2nd ed., Springer-Verlag, New York, NY, 2001.
[16] M. Sato, On a game (Notes by K. Ueno in Japanese), in Proceedings of the 12th Symposium of
the Algebra Section of the Mathematical Society of Japan (1968), 123–136.
[17] M. Sato, Mathematical theory of Maya game (Notes by H. Enomoto in Japanese), RIMS
Kôkyûroku 98 (1970), 105–135.
[18] M. Sato, On Maya game (Notes by H. Enomoto in Japanese), Sugaku no Ayumi 15(1) (1970),
73–84.
[19] R. P. Sprague, Über mathematische Kampfspiele, Tohoku Math. J. 41 (1935), 438–444.
[20] J. R. Stembridge, On the eigenvalues of representations of reflection groups and wreath
products, Pacific J. Math. 140(2) (1989), 353–396.
[21] C. P. Welter, The theory of a class of games on a sequence of squares, in terms of the advancing
operation in a special group, Indag. Math. (Proceedings) 57 (1954), 194–200.
Urban Larsson, Rebecca Milley, Richard Nowakowski,
Gabriel Renault, and Carlos Santos
Recursive comparison tests for dicot and
dead-ending games under misère play
Abstract: In partizan games, where players Left and Right may have different options,
there is a partial order defined as preference by Left: G ⩾ H if Left wins G + X when-
ever she wins H + X for any game position X. In normal play, there is an easy test for
comparison: G ⩾ H if and only if Left wins G − H playing second. In misère play, where
the last player to move loses, the same test does not apply—for one thing, there are
no additive inverses—and very few games are comparable. If we restrict the arbitrary
game X to a subset of games 𝒰 , then we may have G ⩾ H “modulo 𝒰 ”; but without the
easy test from normal play, we must give a general argument about the outcomes of
G + X and H + X for all X ∈ 𝒰 . In this paper, we use the novel theory of absolute com-
binatorial games to develop recursive comparison tests for the well-studied universes
of dicots and dead-ending games. This is the first constructive test for comparison of
dead-ending games under misère play using a new family of end-games called perfect
murders.

Acknowledgement: Urban Larsson was supported in part by the Aly Kaufman Fellowship. Rebecca Milley was supported in part by the Natural Sciences and Engineering Research Council of Canada. Richard Nowakowski was supported in part by the Natural Sciences and Engineering Research Council of Canada. Gabriel Renault was supported by the ANR-14-CE25-0006 project of the French National Research Agency. Carlos Santos was partially funded by Fundação para a Ciência e a Tecnologia through the project UID/MAT/04721/2013.

Urban Larsson, Ind. Engineering and Management, Technion, Israel Institute of Technology, Haifa, Israel, e-mail: urban031@gmail.com
Rebecca Milley, Computational Mathematics, Grenfell Campus, Memorial University, Corner Brook, Canada, e-mail: rmilley@grenfell.mun.ca
Richard Nowakowski, Gabriel Renault, Dept of Mathematics and Statistics, Dalhousie University, Halifax, Canada, e-mails: rjn@mathstat.dal.ca, gabriel.renault@ens-lyon.org
Carlos Santos, Center for Functional Analysis, Linear Structures and Applications, University of Lisbon, Lisbon, Portugal, e-mail: cmfsantos@fc.ul.pt

https://doi.org/10.1515/9783110755411-016

1 Introduction
The purpose of this paper is to develop recursive comparison tests for certain classes of misère-play games. We assume that the reader is familiar with the theory of normal-play combinatorial games, including partizan game outcomes, disjunctive sum and negation, and equality and inequality (see Section 2 for a brief review).
In combinatorial games, comparability is a critical relation. Domination and re-
versibility both rely on comparisons of options to simplify games. In normal play, it is
straightforward to check that two games G and H are comparable:

G ⩾ H ⇔ G − H ⩾ 0 ⇔ Left wins G − H playing second.

However, in misère play—where the first player unable to move is the winner in-
stead of the loser—this simple test does not apply. Games do not have additive inverses
under general misère play, so H + (−H) ≠ 0. In fact, no nonzero game is equal to zero in
misère play [11]; in normal play, all previous-win positions are zero. There is a modified
hand-tying principle1 for misère play, but nontrivial comparisons are rare in general
misère play [16].
If play is restricted to a specific set or universe of games 𝒰 , then G and H may be
comparable “modulo 𝒰 ”, even if they are not in general. This is restricted misère play
[14, 15]. However, without the easy test from normal play, finding instances of misère
comparison in restricted play requires a proof of a universal statement:

G ⩾𝒰 H if for all X ∈ 𝒰 , o(G + X) ⩾ o(H + X);

i. e., Left wins G + X whenever she wins H + X for arbitrary X ∈ 𝒰 .


As an example, consider the following inequality for the game Domineering, in
which Left places vertical dominoes and Right places horizontal [3]:

[2 × 2 board] + [2 × 1 board] ⩾ [2 × 3 board]

In normal play, it is easy to see that this inequality is true; we simply show that Left
wins playing second on this sum (where negation is achieved by rotating a board by
90 degrees):

[2 × 2 board] + [2 × 1 board] + [3 × 2 board (the 2 × 3 board rotated)]

Intuitively, it seems that the same should be true in misère play (perhaps modulo a suitable restricted universe): Right should be happier to play on the unbroken 2 × 3 game, since he has more freedom to move and Left's options are the same either way. Without further tools, to prove this in misère play, we would have to prove that for arbitrary X, if Left wins [2 × 3 board] + X, then she can also win [2 × 2 board] + [2 × 1 board] + X.

1 In misère play, if the Left options of H are a nonempty subset of the Left options of G (or if both are empty), and the Right options of G are a nonempty subset of the Right options of H (or both are empty), then trivially G ⩾ H [16].
In this paper, we present recursive comparison tests for restricted play in two
well-studied universes of games. In the summary of our paper in Section 4, we will
easily prove the domineering comparison above, and will do so in a way that can
be implemented algorithmically. This work is an application of results from abso-
lute game theory [8, 9] to the universes of dicots, 𝒟 and dead-ending games, ℰ . The
new comparison tests establish G ⩾𝒟 H or G ⩾ℰ H based only on outcomes of G
and H and comparisons among their options. The most significant contributions of
this paper are the introduction of strong outcome as a necessary condition for G ⩾ℰ H
and the introduction of perfect murder games as a way to directly calculate strong
outcome.
Section 2 reviews terminology and notation and defines the universes of 𝒟 and ℰ .
Section 3 proves the main results of the paper. Section 3.1 gives necessary conditions
for comparability in 𝒟 and ℰ . Section 3.2 proves that with one additional stipulation,
these conditions are sufficient for comparability in 𝒟. Finally, Section 3.3 proves that
a strengthening of that additional stipulation is necessary and sufficient for compara-
bility in ℰ . A moderate amount of new theory is developed in Section 3.3 to establish
this main result.

2 Definitions
In this section, we review standard definitions from normal-play combinatorial games
in the context of misère play and then define restricted misère play and the universes
of dicots and dead-ending games.

2.1 Options and outcomes


We use standard notation to represent a combinatorial game: G = {Gℒ | Gℛ } is defined
by its set of Left and Right options, Gℒ and Gℛ , respectively, where GL ∈ Gℒ (GR ∈ Gℛ )
is a typical Left (Right) option of G. The zero game is the game with no options for either
player: 0 = {⋅ | ⋅}. The followers of a game are the game itself, its options, the options
of its options, and so on; that is, followers of G are G and all other positions that can
be obtained from playing in G.
Because games do not have additive inverses under general misère play, we use G̅ instead of −G to denote the conjugate of G (the negative in normal play): G̅ = {G̅ℛ | G̅ℒ }, where G̅ℒ = {G̅L : GL ∈ Gℒ }, and similarly for G̅ℛ .
In this paper, we often need to discuss who wins “when Left plays first” and who
wins “when Right plays first”; we use the terms left-outcome and right-outcome for this purpose. Under misère play, they are defined as follows, with L > R:

oL (G) = L if Gℒ = ⌀, and oL (G) = max{oR (GL ) : GL ∈ Gℒ } otherwise;
oR (G) = R if Gℛ = ⌀, and oR (G) = min{oL (GR ) : GR ∈ Gℛ } otherwise;

that is, oL (G) = L if and only if Left wins G playing first, and so on. The overall outcome
can then be defined by the pair of left-outcome and right-outcome:

o(G) = L if (oL (G), oR (G)) = (L, L),
o(G) = N if (oL (G), oR (G)) = (L, R),
o(G) = P if (oL (G), oR (G)) = (R, L),
o(G) = R if (oL (G), oR (G)) = (R, R).

The total order L > R induces the standard partial order on the outcomes: L >
N > R and L > P > R , whereas N and P are incomparable. In this paper, we
use o(G) to mean the outcome under misère play. If necessary, we may use o− (G) to
distinguish the misère outcome from the normal outcome o+ (G).
In a disjunctive sum of games the current player chooses exactly one of the game
components and plays in it, whereas the other components remain the same:

G + H = {Gℒ + H, G + H ℒ | Gℛ + H, G + H ℛ },

where Gℒ + H = {GL + H : GL ∈ Gℒ }, etc. Comparison is then formally defined by

G⩾H if for all X, o(G + X) ⩾ o(H + X).

Two games G and H are equal if G ⩾ H and H ⩾ G, that is, if o(G + X) = o(H + X)
for all games X. In other words, games are equal if they can be interchanged in any
sum without affecting the outcome. Equality and inequality are dependent upon the
ending condition; we assume misère play in this paper but will use ⩾+ and ⩾− to dis-
tinguish normal from misère, respectively, when needed.
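
To make these recursions concrete, the following minimal Python sketch (our own illustration, not part of the original paper) computes misère left-, right-, and overall outcomes for a short game stored as a pair (Left options, Right options); all names in it are ours.

    from functools import lru_cache

    # A game form is a pair (left_options, right_options); each option is again a form.
    ZERO = ((), ())            # 0 = {. | .}: no options for either player
    STAR = ((ZERO,), (ZERO,))  # * = {0 | 0}

    L, R = 1, 0                # encode the total order L > R

    @lru_cache(maxsize=None)
    def o_L(g):
        # Misère left-outcome: Left, moving first, wins if she has no move,
        # and otherwise takes her best option (max over Right-outcomes).
        left, _ = g
        return L if not left else max(o_R(gl) for gl in left)

    @lru_cache(maxsize=None)
    def o_R(g):
        # Misère right-outcome, symmetrically (min over Left-outcomes).
        _, right = g
        return R if not right else min(o_L(gr) for gr in right)

    def outcome(g):
        # Pair the two half-outcomes into L, N, P, or R.
        return {(L, L): "L", (L, R): "N", (R, L): "P", (R, R): "R"}[(o_L(g), o_R(g))]

    print(outcome(ZERO), outcome(STAR))  # "N P": 0 is next-win and * is previous-win in misère play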

2.2 Universes and restricted misère play


Since analysis of games under misère play is difficult, recent work has focused on
studying well-suited restrictions of the set of all games (see [13] for a survey). The re-
strictions relevant for this work are the classes of dicot and dead-ending games.
A game is a dicot if for each follower, either no player can move, or both players
can move. These games are called “all-small” in normal play.
Informally, a game is dead-ending if once a player runs out of moves in a game, that player will never again have a move in that game. Many well-studied rulesets, including Domineering and Hackenbush (see [3] for descriptions), have the dead-ending property. To define this formally, we need to talk about ends: games for which at least one player has no options. In normal play, ends are simple: every end is an integer. As noted in [16], ends in misère play are very problematic: whereas in isolation Left is happy to have no options, when an end is played in a disjunctive sum she may be unhappy to be forced to play elsewhere.
A game G is a left-end if it has no Left option, and a game is a dead left-end if each
follower is also a left-end. Thus, in a dead left-end, Left has no immediate moves, and
there is nothing Right can do to “open up” moves for Left. Define right-end and dead
right-end analogously. A game is then dead-ending if each left-end follower is a dead
left-end and each right-end follower is a dead right-end.
The set of all dicot games is denoted by 𝒟, and the set of all dead-ending games is
denoted by ℰ . Note that 𝒟 is the restriction of ℰ in which the only end is the zero game, 0 = {⋅ | ⋅}. Both 𝒟 and ℰ satisfy some important closure properties, which classify each as a universe of games.
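
For illustration, both defining properties can be checked recursively in the representation of our earlier sketch (again our own code, not the paper's):

    def is_dicot(g):
        # Dicot: in every follower, either no player or both players can move.
        left, right = g
        if bool(left) != bool(right):
            return False
        return all(is_dicot(h) for h in left + right)

    def is_dead_left_end(g):
        # Left has no move now, and no follower ever gives Left a move.
        left, right = g
        return not left and all(is_dead_left_end(h) for h in right)

    def is_dead_right_end(g):
        left, right = g
        return not right and all(is_dead_right_end(h) for h in left)

    def is_dead_ending(g):
        # Every left-end follower is a dead left end, and dually for right ends.
        left, right = g
        if not left and not is_dead_left_end(g):
            return False
        if not right and not is_dead_right_end(g):
            return False
        return all(is_dead_ending(h) for h in left + right)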

Definition 1. A universe 𝒰 is a nonempty set of games that satisfies the following prop-
erties:
1. option closure: if G ∈ 𝒰 and G′ is an option of G, then G′ ∈ 𝒰 ;
2. disjunctive sum closure: if G, H ∈ 𝒰 , then G + H ∈ 𝒰 ;
3. conjugate closure: if G ∈ 𝒰 , then G ∈ 𝒰 .

Restricted misère game theory uses weakened definitions of equality and inequal-
ity to study games modulo a universe 𝒰 :

G ≡𝒰 H if for all X ∈ 𝒰 , o(G + X) = o(H + X)


G ⩾𝒰 H if for all X ∈ 𝒰 , o(G + X) ⩾ o(H + X).

The idea is that we may get equality and comparability, and perhaps reductions
and invertibility, “modulo 𝒰 ”, even if the relations do not hold in general or do not
hold in a larger universe.
For example, the game ∗ = {0 | 0} is invertible modulo 𝒟 [1] but not in ℰ [12]: ∗ + ∗ ≡𝒟 0, but ∗ + ∗ ≢ℰ 0. The games 1 = {0 | ⋅} and 1̅ = {⋅ | 0} are additive inverses modulo ℰ (and thus also modulo 𝒟): 1 + 1̅ ≡ℰ 0, but this is not true in general unrestricted misère play.
The universes of dicots and dead-ending games have proven fruitful for misère
analysis (see [13]), and we continue the development of that theory by introducing
comparison tests for 𝒟 and ℰ .
Recently, [8] introduced absolute combinatorial game theory, a general theory for
combinatorial games under a nonspecified ending condition. The theory applies to absolute universes, which are defined by the parental property: for all nonempty sets of
games S, T ⊂ 𝒰 , if G is a game with Gℒ = S and Gℛ = T, then G is also in 𝒰 . Impar-
tial games are not parental; for example, 0 and ∗ are in the impartial universe, but
{0 | ∗} is not. Dicots are parental: if each player has a nonempty set of dicot games
as options, then the game is a dicot. Likewise, dead-ending games are parental. Our
comparison tests for dicots and dead-ending games are specific adaptations of results
from absolute game theory.

3 Recursive comparison
In this section, we develop our main results. In Section 3.1, we review a result from [8]
that gives necessary and sufficient conditions for G ⩾ H in any absolute universe. In
that paper the conditions are named the Proviso and the Maintenance property. The
Maintenance property is confirmed recursively, but in general the Proviso is not. In
Section 3.2, we show how the Proviso reduces to o(G) ⩾ o(H) when the universe is the
set of all dicots 𝒟, and this gives an entirely constructive comparison test for dicots.
The same idea is implicit in [2], where it is stated in terms of the down-linked relation
[16], now generalized by the Maintenance property.
In Section 3.3 we present our main, original results: we show that for the dead-
ending universe, ℰ , the Proviso reduces to a consideration of specific end games which
we call perfect murders. As in 𝒟, the result is a completely recursive comparison test
for G ⩾ℰ H.

3.1 Proviso and maintenance


Theorem 1 states a major result from absolute game theory [8], which gives necessary
and sufficient conditions (dependent on the ending condition) for G ⩾𝒰 H in an absolute universe 𝒰 . The proof (not included here) is highly nontrivial and uses the adjoint operation, down-linked relation, and other concepts from Siegel [16] and Ettinger [6].
We will apply the result to the universes of dicots and dead-ending games under mis-
ère play.

Theorem 1 (Proviso and Maintenance [8]). Let 𝒰 be an absolute universe, and let
G, H ∈ 𝒰 . Then G ⩾𝒰 H if and only if the following hold:
Proviso:
1. oL (G + X) ⩾ oL (H + X) for all Left ends X ∈ 𝒰 ;
2. oR (G + X) ⩾ oR (H + X) for all Right ends X ∈ 𝒰 .

Maintenance:
1. ∀H L ∈ H ℒ , ∃GL ∈ Gℒ : GL ⩾𝒰 H L or ∃H LR ∈ H Lℛ : G ⩾𝒰 H LR ; and
2. ∀GR ∈ Gℛ , ∃H R ∈ H ℛ : GR ⩾𝒰 H R or ∃GRL ∈ GRℒ : GRL ⩾𝒰 H.

The idea behind the Maintenance property is that in an absolute universe, when playing G + H̅ with G ⩾𝒰 H, Left can always “maintain” her position: no matter what move Right makes in G + H̅, Left can bring the game back to a position that is just as good for Left as before Right moved. In some sense, this is a generalization of the fact that in normal play, G ⩾ H ⇒ G + H̅ ⩾ 0 (recall that we do not have this in misère play, because H + H̅ is generally not equal to 0). Note that a Right move in H is a Left move in H̅, and so the conditions in Theorem 1 are stated without reference to conjugates.
Note that all inequality relations in this section are considered with misère-play
ending condition, unless otherwise specified; if necessary, we use ⩾− for misère play
and ⩾+ for normal play.
Incidentally, Theorem 1 implies the existence of an order-preserving map of
misère-play into normal-play, that is, if G ⩾−𝒰 H, then G ⩾+ H. This result is al-
ready known [2, 8], but we give the argument in Corollary 1 below to illustrate how it
follows from Theorem 1. We can gain some intuition for the idea by considering the
game {n | −n} for a large integer n. Using Chess terminology, this game acts like a
“large zugzwang” under misère play: players do not want to be the first to play in this
position, and so in both G + {n | −n} and H + {n | −n}, players should play G and H
with a “normal-play strategy”, trying to get the last move and force the opponent to
play first on the zugzwang part. Thus if we do not have G ⩾+ H, then we cannot have
G ⩾−𝒰 H.

Corollary 1. For any absolute universe 𝒰 , if G ⩾−𝒰 H, then G ⩾+ H.

Proof. Suppose that G ⩾−𝒰 H, so that conditions (1) and (2) of Theorem 1 (Maintenance) hold. We need to show that Left, playing second, wins G + H̅ under normal play. Right's options are of the form GR + H̅ or G + H̅L ; in each case, the Maintenance property guarantees the existence of a response for Left. Moreover, that response always brings the game to another position of the form X + Y̅ with X ⩾−𝒰 Y. Thus, in all followers of G + H̅, Left will always be able to reply to any Right move, and so Left will win G + H̅ under normal play.

3.2 The proviso for dicots


In the universe of dicots 𝒟 the only end is the zero game. Thus the Proviso in 𝒟 reduces
to “Left wins G first if Left wins H first, and Right wins H first if Right wins G first.” This
is precisely equivalent to o(G) ⩾ o(H), where these functions indicate misère outcome.
Thus we have the recursive comparison test for 𝒟 stated below. As we omitted the proof
of Theorem 1, we include here a partial proof of the result specifically for 𝒟.

Theorem 2 (Comparison in 𝒟). Let G, H ∈ 𝒟. Then G ⩾𝒟 H if and only if
1. o(G) ⩾ o(H);
2. ∀H L ∈ H ℒ , ∃GL ∈ Gℒ : GL ⩾𝒟 H L or ∃H LR ∈ H Lℛ : G ⩾𝒟 H LR ;
3. ∀GR ∈ Gℛ , ∃H R ∈ H ℛ : GR ⩾𝒟 H R or ∃GRL ∈ GRℒ : GRL ⩾𝒟 H.

Proof. (⇒) It is trivial that G ⩾𝒟 H implies condition 1. Since 𝒟 is absolute, G ⩾𝒟 H implies conditions 2 and 3 by Theorem 1 (Maintenance).
(⇐) Assume that G, H ∈ 𝒟 and conditions 1, 2, and 3 are satisfied. We need to show
o(G + X) ⩾ o(H + X) for all X ∈ 𝒟. We proceed by induction. For the base case where
X = 0, we need only o(G) ⩾ o(H); this is given by condition 1.
Let X be any game of rank greater than 0, and assume that o(G +X ′ ) ⩾ o(H +X ′ ) for
all X ′ of smaller rank than X. To show that o(G + X) ⩾ o(H + X), we show that when Left
wins H + X going first (second), Left also wins G + X going first (second). So suppose
Left wins H + X going first. Since X ≠ 0 and X is a dicot, we know that Left has a move
in H + X.
If Left wins H + X with a move to H + X L , then Left wins G + X with a move to G + X L
by the induction hypothesis. Otherwise, Left wins H + X with a move to H L + X. By
condition 2 there is either a GL ⩾𝒟 H L , in which case Left wins G + X with GL + X, or
there is an H LR ⩽𝒟 G. However, H LR + X is left-win or next-win, because H L + X is a
good first left move, so this means that G + X is left-win or next-win, as required.
The argument for Left playing second is similar, using condition 3 instead of con-
dition 2.
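
Conditions 1–3 translate directly into a mutually recursive test. The sketch below (ours) reuses the outcome function from the Section 2.1 snippet and assumes both inputs are dicots.

    # Partial order on outcomes: L > N > R and L > P > R; N and P incomparable.
    OUTCOME_GEQ = {("L", "L"), ("L", "N"), ("L", "P"), ("L", "R"),
                   ("N", "N"), ("N", "R"), ("P", "P"), ("P", "R"), ("R", "R")}

    def geq_D(g, h):
        # Test G >=_D H by Theorem 2 (comparison in the dicot universe).
        gl, gr = g
        hl, hr = h
        if (outcome(g), outcome(h)) not in OUTCOME_GEQ:              # condition 1
            return False
        for h_l in hl:                                               # condition 2
            if not (any(geq_D(g_l, h_l) for g_l in gl)
                    or any(geq_D(g, h_lr) for h_lr in h_l[1])):
                return False
        for g_r in gr:                                               # condition 3
            if not (any(geq_D(g_r, h_r) for h_r in hr)
                    or any(geq_D(g_rl, h) for g_rl in g_r[0])):
                return False
        return True

Each recursive call strictly decreases the combined birthday of the two arguments, so the test terminates.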

3.3 The proviso for dead-ending games


Recall that the universe of dead-ending games ℰ is a superset of the dicots. In 𝒟 the
only end is 0; in ℰ , there are nonzero ends, but they must be dead ends: for example,
a left end in ℰ has no options for Left now and no options for Left later. The comparison
test for dicots is not quite strong enough to give inequality modulo ℰ , even if both
games are dicots.
To illustrate the complication, consider G = {∗ | ∗} = ∗ + ∗ and H = 0. It is easy
to check that all three conditions of Theorem 2 are satisfied, so that G ⩾𝒟 H (in fact,
G ≡𝒟 H by symmetry). However, it is not true that G ⩾ℰ H. Consider the logic for
dicots. If Left can win 0 + X, then Left can follow the same strategy to win ∗ + ∗ + X:
if Right plays in ∗ + ∗, Left can respond to bring that component to zero and resume
winning on X; otherwise, Left ignores ∗ + ∗ and eventually runs out of moves in X, at
which point X must be 0, and then Left wins playing next on ∗ + ∗.
The problem for dead-ending games is the conclusion “at which point X must
be 0”. In ℰ , when Left runs out of moves in X, the position is a left end, but not neces-
sarily 0. In our example above, suppose Right has a single move remaining in X when
Left runs out of moves in X; now Left moves in one of the stars to leave ∗ + 1̅, and Right wins from here by moving to ∗. So ∗ + ∗ ⩾̸ℰ 0 (because Left prefers 0 + 1̅ over ∗ + ∗ + 1̅).

For dead-ending games, the base case will be that X is any left end, not necessarily
0; this brings us back to the Proviso:
1. oL (G + X) ⩾ oL (H + X) for all Left ends X ∈ 𝒰 ;
2. oR (G + X) ⩾ oR (H + X) for all Right ends X ∈ 𝒰 ;

Our goal is to find a sufficient constructive condition for the Proviso in ℰ .


We begin by introducing new notation for the worst possible outcome (for Left) of
“G + X for any left end”.

Definition 2. The strong Left-outcome and strong Right-outcome of G ∈ ℰ are

ôL (G) = min{oL (G + X) : X a Left end in ℰ },
ôR (G) = max{oR (G + Y) : Y a Right end in ℰ },

respectively.

The strong Left- or Right-outcome can be determined by a general argument. For example, if G = {0, ∗ | 0}, then the strong left-outcome is L because Left, playing first, wins G + X for any nonzero left end X by playing to 0 + X, and Left, playing first, wins G + 0 by playing to ∗.
However, we claim there is a direct constructive way to compute strong outcome
and thereby verify the Proviso in ℰ . For this, we introduce a new family of dead ends,
and we will prove that these are the “worst” left-ends for Left and the worst right-ends
for Right. Consider a left-end over which Right has total control: Right can terminate
the end at any point (i. e., Right always has a move to zero). We call this type of game
a perfect murder. In a sum of G and a perfect murder, it is as if Right uses the murder
game to “pass” in G, until it is advantageous for Right to terminate that end and, if
possible, force Left into the last move in G.

Definition 3. The perfect murder of rank n, Mn ∈ ℰ , is recursively defined by

Mn = 0 if n = 0, and Mn = {⋅ | 0, Mn−1 } if n > 0.
Thus M0 = 0, M1 = {⋅ | 0}, M2 = {⋅ | 0, {⋅ | 0}}, and so on. Figure 16.1 shows perfect murders of rank up to 4.
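
In the representation of our earlier sketches, the recursion of Definition 3 is a one-line loop (again our own illustration):

    def perfect_murder(n):
        # M_0 = 0; M_n = {. | 0, M_{n-1}}: Right can always terminate by moving to 0.
        g = ZERO
        for _ in range(n):
            g = ((), (ZERO, g))
        return g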
We aim to prove that perfect murder games can be used to easily determine the
strong outcome of a given game. This is done across three results (stated here from
Left’s perspective): Lemma 1 shows that Left prefers Mn to Mn+1 ; Lemma 2 shows that
Left prefers any left end to Mn , provided that the rank is at most n; and finally, Theo-
rem 3 shows that the strong left-outcome of G with rank k is precisely the smaller of
oL (G) and oL (G + Mk−1 ).
Figure 16.1: Perfect murder games of rank 0 to 4 (M0 through M4).

Lemma 1. For all n > 0, Mn ⩾ℰ Mn+1 .

Proof. We need to show that o(Mn + X) ⩾ o(Mn+1 + X) for all n > 0 and X ∈ ℰ . Let n > 0
and suppose Right wins Mn + X (first or second or both). We need to show that Right
can win Mn+1 + X.
Right’s winning move in Mn + X must be to 0 + X ′ for a follower X ′ of X (in which
Left moves to a Right end). Say this move to 0 occurs at level k of Mn . Then Right can
win Mn+1 + X by following exactly the same strategy, moving to 0 + X ′ at level k of
Mn+1 .

Lemma 2. If G is a Left-end of rank k > 0, then G ⩾ℰ Mn for all n ⩾ k.

Proof. Let G ∈ ℰ be a fixed Left-end of rank k > 0. By Lemma 1 it suffices to show that
G ⩾ℰ Mk .
Let X be an arbitrary game in ℰ . We must prove that o(G + X) ⩾ o(Mk + X). If X = 0,
then o(G + X) = L = o(Mk + X), because both G and Mk are nonzero left-ends. Now
assume that rank(X) > 0.
Suppose Left wins Mk + X going first. Then Left’s good first move is to Mk + X L . By
induction, G + X L is at least as good as this move, so Left can also win G + X going first.
Suppose Left wins Mk + X going second; so all Right moves in Mk + X are left- or
next-win. Consider Left playing second in G + X. There are three possibilities:
(1) If there is a Right option GR = 0, then since this is also an option of Mk , Left wins
0 + X.
(2) If Right moves to GR + X, with GR ≠ 0, then by induction GR ⩾ℰ Mk−1 . Since Left
wins from Mk−1 + X, Left also wins from GR + X.
(3) If Right moves to G+X R , then by induction this is at least as good for Left as Mk +X R ,
which Left wins.

Recall that the strong left-outcome ôL of a game G is the minimum left-outcome (L or R, with L > R) of G + X, where X ranges over all possible left ends in ℰ . Lemma 2 establishes that perfect murders are the worst (nonzero) left ends for Left. It then makes sense that when calculating the strong left-outcome, we need only consider the outcome of G plus a perfect murder left-end, and also G + 0. Theorem 3 shows that Mk−1 will yield the minimum outcome of G with a nonzero left end.

Theorem 3. Let G ∈ ℰ . If rank(G) = k, then

ôL (G) = min{oL (G), oL (G + Mk−1 )},
ôR (G) = max{oR (G), oR (G + M̅k−1 )}.

Proof. We prove the result for ôL ; for ôR it follows analogously. If oL (G + 0) = R, then the result is clear. Otherwise, let X be a nonzero left end such that oL (G + X) is a minimum.
We need to show that oL (G + Mk−1 ) is not greater than oL (G + X); that is, oL (G +
X) ⩾ oL (G + Mk−1 ). If rank(X) ⩽ k − 1, then this follows by Lemma 2. So assume that
rank(X) ⩾ k. Let X ′ be the game X “trimmed” to rank k − 1 (that is, X ′ is obtained
from the game tree of X by deleting all nodes of levels larger than or equal to k). Then
by Lemma 2, X ′ ⩾ℰ Mk−1 , and so oL (G + X ′ ) ⩾ oL (G + Mk−1 ). We will now show that
oL (G + X) ⩾ oL (G + X ′ ) by showing that oL (G + X ′ ) = L ⇒ oL (G + X) = L.
Suppose oL (G+X ′ ) = L. To win G+X playing first, Left can follow the same strategy
as in G + X ′ . The only way Right can foil this is if Right makes a move in G + X that was
not possible in G + X ′ , but that move would be in level k or higher in X, and at that
point, Left has made k moves in G, and so there are no moves remaining in G (which
has rank k) and no left moves in X. So Left wins.
Thus oL (G + X) ⩾ oL (G + X ′ ) ⩾ oL (G + Mk−1 ) for any nonzero left end X, and so
the minimum left-outcome of G with any left-end is the minimum of oL (G + Mk−1 ) and
oL (G + 0).

We now have a constructive way to compute the strong left-outcome and strong
right-outcome. We can pair the two outcomes to give the strong outcome of the game.

Definition 4. The strong outcome of G ∈ ℰ is

ô(G) = L if (ôL (G), ôR (G)) = (L, L),
ô(G) = N if (ôL (G), ôR (G)) = (L, R),
ô(G) = P if (ôL (G), ôR (G)) = (R, L),
ô(G) = R if (ôL (G), ôR (G)) = (R, R).

Observation 4. Let E be a nonzero dead Left-end. Then ô(E) = L , because if Left goes first, then she has no move and wins, and if Right goes first, then the position is still a Left-end (because the original position was a dead end), and so Left also wins going second.
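
Theorem 3 and Definition 4 combine into a direct computation. The sketch below (ours) assumes the helpers from the earlier snippets; for the strong right-outcome it uses the conjugate of the perfect murder as the worst right end, matching the "analogous" direction in the proof of Theorem 3.

    def rank(g):
        left, right = g
        return 0 if not (left + right) else 1 + max(rank(h) for h in left + right)

    def conjugate(g):
        left, right = g
        return (tuple(conjugate(h) for h in right), tuple(conjugate(h) for h in left))

    def add(g, h):
        # Disjunctive sum: the mover plays in exactly one component.
        gl, gr = g
        hl, hr = h
        return (tuple(add(x, h) for x in gl) + tuple(add(g, x) for x in hl),
                tuple(add(x, h) for x in gr) + tuple(add(g, x) for x in hr))

    def strong_outcome(g):
        # Strong outcome per Definition 4, with hat-o_L and hat-o_R via Theorem 3.
        m = perfect_murder(max(rank(g) - 1, 0))   # for rank 0 the only left end is 0
        hat_L = min(o_L(g), o_L(add(g, m)))
        hat_R = max(o_R(g), o_R(add(g, conjugate(m))))
        return {(L, L): "L", (L, R): "N", (R, L): "P", (R, R): "R"}[(hat_L, hat_R)]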

With the concept of strong outcome, we now have a recursive comparison test for
dead-ending games.

Theorem 5 (Comparison in ℰ ). Let G, H ∈ ℰ . Then G ⩾ℰ H if and only if
1. ô(G) ⩾ ô(H);
2. ∀H L ∈ H ℒ , ∃GL ∈ Gℒ : GL ⩾ℰ H L or ∃H LR ∈ H Lℛ : G ⩾ℰ H LR ;
3. ∀GR ∈ Gℛ , ∃H R ∈ H ℛ : GR ⩾ℰ H R or ∃GRL ∈ GRℒ : GRL ⩾ℰ H.

Proof. (⇒) Since ℰ is absolute, G ⩾ℰ H implies conditions 2 and 3 by Theorem 1 (Maintenance). Also, G ⩾ℰ H implies condition 1; if not, then (without loss of generality) ôL (G) = R and ôL (H) = L, so there is a Left-end X such that oL (G + X) = R, but for X (and all Left-ends), oL (H + X) = L; this contradicts G ⩾ℰ H.
(⇐) Assume that G, H ∈ ℰ and conditions 1, 2, and 3 are satisfied. We need to show that o(G + X) ⩾ o(H + X) for all X ∈ ℰ . We use induction on the “left-rank” (the depth of the game tree beginning with a Left move) of X. In fact, the proof is identical to that of Theorem 2, except for the base case. Here the base case is where X is a left end. We will prove that conditions 1, 2, and 3 together imply o(G + X) ⩾ o(H + X) for all left ends X ∈ ℰ .
Assume that ô(G) ⩾ ô(H), and let X be any left end in ℰ . We need to show o(G + X) ⩾ o(H + X). Suppose Left wins H + X moving first (moving second follows analogously). If this is because H is also a left end, then ô(H) = L ⇒ ô(G) = L ⇒ oL (G + Y) = L for all left ends Y, so Left wins G + X going first.
If H is not a left end, then Left wins H + X with a move to H L + X. By condition 2, either (1) there is a GL ⩾ℰ H L , or (2) there is an H LR ⩽ℰ G. If (1), then since Left wins H + X with H L + X, we know that Left will win G + X with GL + X; if (2), since Left wins moving next on H LR + X (as it is a right option of H L + X), we know that Left wins moving next on G + X.
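
The resulting test for ℰ is the dicot test with the base condition upgraded from outcomes to strong outcomes; here is a sketch of ours reusing the helpers above.

    def geq_E(g, h):
        # Test G >=_E H by Theorem 5 (comparison in the dead-ending universe).
        gl, gr = g
        hl, hr = h
        if (strong_outcome(g), strong_outcome(h)) not in OUTCOME_GEQ:  # condition 1
            return False
        for h_l in hl:                                                 # condition 2
            if not (any(geq_E(g_l, h_l) for g_l in gl)
                    or any(geq_E(g, h_lr) for h_lr in h_l[1])):
                return False
        for g_r in gr:                                                 # condition 3
            if not (any(geq_E(g_r, h_r) for h_r in hr)
                    or any(geq_E(g_rl, h) for g_rl in g_r[0])):
                return False
        return True

    # Example 6 below: with ONE = ((ZERO,), ()) and NEG_ONE = ((), (ZERO,)),
    # both geq_E(((NEG_ONE,), (ONE,)), ZERO) and its converse return True.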

We end with a few examples of Theorem 5, including the Domineering inequality from the introduction.

Example 6. Consider G = {−1 | 1}. Then ôL (G) = L and ôR (G) = R. Therefore ô(G) = N = ô(0). Note that GR = 1 and GRL = 0 ⩾ℰ 0, so Theorem 5 gives G ⩾ℰ 0. Symmetrically, G ⩽ℰ 0. Hence G ≡ℰ 0.

Example 7. Recall the conjectured Domineering inequality from Section 1:

[2 × 2 board] + [2 × 1 board] ⩾ℰ [2 × 3 board]

This follows easily by Theorem 5, since
(1) the strong outcome of both sides is P ,
(2) all Left options of the 2 × 3 board are also Left options of [2 × 2 board] + [2 × 1 board], and
(3) the single Right option of [2 × 2 board] + [2 × 1 board] is [1 × 2 board] + [2 × 1 board], which is trivially greater than or equal to the Right option of the 2 × 3 board by hand-tying.

4 Summary and future directions


We have adapted general results from absolute CGT [8] to develop recursive compar-
ison tests for restricted misère play in the dicot and dead-ending universes 𝒟 and ℰ .
The main construction for ℰ is the perfect murder family of games, which function
as the “worst” nonzero ends for a given player. Perfect murder positions are used to
directly calculate the strong outcome of a game.
The recursive comparison tests can be used to help analyze specific rule sets
within the universes of 𝒟 and ℰ . Many commonly studied rulesets, including all
placement games [7], are dead-ending. Example 7 illustrates how our comparison
test for ℰ can be used to find results for the game of Domineering (other similar
inequalities for Domineering are presented in [4]).
Since conditions (2) and (3) of Theorem 5 are recursive and since strong outcome
can be calculated directly using perfect murder games, our comparison tests can be
implemented computationally. In [5] a computer program has been written to apply
Theorem 5 to determine all nontrivial comparisons of rank-2 dead-ending games.
A direction for future work is to find and study other parental universes, so that we
can use absolute game theory to develop analogous comparison tests for those games.
We can consider universes between 𝒟 and ℰ by considering the closure (under sums,
conjugates, and parentality) of 𝒟 union some other non-dicot dead-ending position(s)
[10]. We can also consider extensions of ℰ ; are there parental universes between ℰ
and the full universe? Would something stronger than strong outcome be sufficient
for comparison in such a universe? The answers to these questions will allow us to continue to develop the theory of restricted misère play.

Bibliography
[1] M. R. Allen, Peeking at partizan misère quotients, in R. J. Nowakowski (Ed.) Games of No Chance
4, MSRI Publ. 63 (2015), 1–12.
[2] P. Dorbec, G. Renault, A. N. Siegel, and E. Sopena, Dicots, and a taxonomic ranking for misère
games, J. Combin. Theory Ser. A 130 (2015), 42–63.
[3] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays, Vol. 2,
A. K. Peters Ltd., MA, 2001.
[4] A. Dwyer, R. Milley, and M. Willette, Misère domineering on 2 × n boards, Integers, 21B (2021),
#A10.
[5] A. Dwyer, R. Milley, and M. Willette, Dead-ending day-2 games under misère play,
Undergraduate Thesis, Grenfell Campus – Memorial University, 2020.
[6] J. M. Ettinger, Topics in combinatorial games, PhD Thesis, University of Wisconsin–Madison,
1996.
[7] S. Huntemann, The class of strong placement games: complexes, values, and temperature, PhD
Thesis, Dalhousie University, 2018.
[8] U. Larsson, R. J. Nowakowski, and C. P. Santos, Absolute combinatorial game theory, in S.
Huntemann and U. Larsson (Eds.) Games of No Chance 6, MSRI Publ. (2022), to appear.

[9] U. Larsson, R. J. Nowakowski, and C. P. Santos, Game comparison through play, Theoret.
Comput. Sci. 725 (2018), 52–63.
[10] U. Larsson, R. J. Nowakowski, and C. P. Santos, manuscript.
[11] G. A. Mesdal and P. Ottaway, Simplification of partizan games in misère play, Integers 7 (2007),
#G6.
[12] R. Milley and G. Renault, Dead ends in misère play: the misère monoid of canonical numbers,
Discrete Math. 313 (2013), 2223–2231.
[13] R. Milley and G. Renault, Restricted developments in partizan misère game theory, in U.
Larsson (Ed.) Games of No Chance 5, MSRI Publ. 70 (2018).
[14] T. E. Plambeck, Taming the wild in impartial combinatorial games, Integers 5 (2005), #G5.
[15] T. E. Plambeck and A. N. Siegel, Misère quotients for impartial games, J. Combin. Theory Ser. A
115 (2008), 593–622.
[16] A. N. Siegel, Misère canonical forms of partizan games, in R. J. Nowakowski (Ed.) Games of No
Chance 4, MSRI Publ. 63 (2015).
Urban Larsson, Richard J. Nowakowski, and Carlos P. Santos
Impartial games with entailing moves
Abstract: Combinatorial game theory has also been called “additive game theory”
whenever the analysis involves sums of independent game components. Such dis-
junctive sums invoke comparison between games, which allows abstract values to be
assigned to them. However, there are rulesets with entailing moves that break the al-
ternating play axiom and/or restrict the other player’s options within the disjunctive
sum components. These situations are exemplified in the literature by a ruleset such
as nimstring, a normal play variation of the classical children’s game dots & boxes,
and top entails, an elegant ruleset introduced in the classical work Winning Ways by
Berlekamp, Conway, and Guy. Such rulesets fall outside the scope of the established
normal play theory. Here we axiomatize normal play via two new terminating games,
∞ (Left wins) and ∞̄ (Right wins), and achieve a more general theory. We define affine
impartial, which extends classical impartial games, and we analyze their algebra by
extending the established Sprague–Grundy theory with an accompanying minimum
excluded rule. Solutions of nimstring and top entails are given to illustrate the the-
ory.

Acknowledgement: Carlos P. Santos was partially supported by FCT – Fundação para a Ciência e Tecnologia, under the project UIDB/04721/2020.

Urban Larsson, School of Computing, National University of Singapore, Singapore, Singapore, e-mail: urban031@gmail.com
Richard J. Nowakowski, Department of Mathematics and Statistics, Dalhousie University, Halifax, Canada, e-mail: r.nowakowski@dal.ca
Carlos P. Santos, CEAFEL, University of Lisbon & ISEL–IPL, Lisbon, Portugal, e-mail: carlos.santos@isel.pt

https://doi.org/10.1515/9783110755411-017

1 Introduction
Combinatorial game theory (CGT), as described in [1, 3, 4, 7], considers disjunctive sums of normal play games. To evaluate the outcome of a sum of such games, it suffices to analyze the components individually and then add the individual values.
However, some classical impartial rulesets, such as nimstring and top entails, fall slightly outside the usual CGT axioms. In nimstring, certain moves require a player to play again, or carry-on, which is a violation of the alternating play axiom. In top entails, certain moves force the next player to play in the same component, which violates the standard definition of a disjunctive sum. Thus the values of individual components are no longer a relevant measure, given the standard CGT axioms. The types of moves mentioned in this paragraph will be gathered under the term entailing moves.1
The purpose of this paper is to extend impartial normal play games sufficiently to
include games with entailing moves. While accomplishing this, we expand the classi-
cal Sprague–Grundy theory to fit this extension.
We will rebuild the normal play axioms by using so-called terminating games, or
infinities, ∞ and ∞̄. Here we focus on the impartial setting, and the general compre-
hensive theory for partizan games will appear in [5].2 These theories are called affine
impartial and affine normal play, respectively.
Although we consider only impartial games in this paper, we will keep the players
distinguished as Left and Right. In particular, Left wins if either player plays to ∞ in
any component, and Right wins in case of play to ∞̄. Note that the normal play zero is restored by defining 0 = {∞̄ | ∞}, a first-player-losing position.
It is well known that in classical combinatorial game theory the impartial values
are nimbers. We will prove that there is exactly one more value modulo affine impar-
tial, a game K, called the moon. This value was anticipated in the classical work Win-
ning Ways by Berlekamp, Conway, and Guy. In [3, vol. 2, p. 398], we can read “A loony
move is one that loses for a player, no matter what other components are.” Before de-
veloping the theory, we illustrate how the infinities are used in the motivating rulesets,
nimstring [2, 3] and top entails [3, vol. 2].
Let us first briefly mention the organization of the paper. To facilitate the devel-
opment of the new impartial theory, Section 2 considers the basic properties of unre-
stricted affine normal play, aiming for a game comparison result, Theorem 7. The affine
impartial theory is developed in Section 3. The main result is Theorem 17, which shows
that values in this extension are the nimbers plus one more value. Theorem 20 gives
an algorithm to find the value of a given position, and notably, if there are no infini-
ties in the options, then the nimbers are obtained by the usual mex-rule. We finish off
with two case studies. In Section 4, we compute the value of an interesting nimstring
position, anticipated in Section 1.1. In Section 5, we compute the values for top entails heaps of sizes 1 through 12, and Theorem 21 provides theoretical justification for
computing top entails values.

1.1 The ruleset nimstring


In nimstring a player draws a line between two horizontally or vertically adjacent
points in a finite grid, not already joined by a line. If a player completes a 1 × 1 square,
then they must draw another line, and if they cannot do so, they lose.

1 Entailing means “involve something as a necessary or inevitable part or consequence”.


2 There are partizan rulesets, in the literature and in recreational play, with similar entailing and
terminating moves. Probably the most prominent ones are the game of chess and a version of go
called atari go.

Figure 17.1 shows an example position, where no square can be completed in the
next move. Later, through the new theory, we will see that the position H equals ∗2
modulo affine impartial.

Figure 17.1: A nimstring position H.

In Figure 17.2, we show a position G with two options, one of which is an entailing
move. Namely, if the top bar is drawn, then the next player continues, but if the middle
bar is drawn, then the current player has to carry-on.

Figure 17.2: A nimstring position G with its two options, a “double-box” and an entailing carry-on
position.

When we develop the theory, we will see that the position G, to the left in Figure 17.2,
is the abstract game

{{∞ | 0}, 0 | {0 | ∞̄}, 0}. (17.1)

The option 0 is obtained by drawing the top bar. The intuition for this is as follows:
if a player can win X, then he can also win “double box”+X because the player who
moves inside the “double box” has to play again. Due to that, “double box” is neutral
in all disjunctive sums, including the case X = { | }.
If a player draws the middle bar in G, then they have to carry-on, and this is rep-
resented by the abstract option {∞ | 0} if Left moved. There is an infinite urgency in
this game: Right has to play here or lose. So the effect is as desired: Left plays again,
and alternating play is restored. Hence disjunctive sum play is also restored within
the affine impartial convention. Moreover, the Right option in this threat should be 0,
because Left loses by playing this option if G is played alone. If the sum is G + H with
H as in Figure 17.1, then the next player wins by playing this entailing middle bar in G.

1.2 The ruleset top entails


Top entails is played on heaps of tokens. A player may either remove the top token
from exactly one heap or split a heap into two nonempty heaps. If the top token is
removed from a heap, then the next move (in alternating play) must be played on the
same heap.
A heap with one token, say H, is a first player win in any situation. Namely, a move
in H forces the opponent to play in the same heap, where no move remains. Note that
the abstract game H = {∞ | ∞̄} settles this behavior. The player who moves first in H wins independently of the existence of other components. The point we wish to make here is that this abstract representation settles the problem of independence of a heap of size one from other disjunctive sum components.
Consider G in Figure 17.3, a pile of size 3. There are two options, as depicted in
Figures 17.4 and 17.5.

Figure 17.3: A pile G of top entails of size 3.

The option in Figure 17.4 splits G into two piles, and the next player’s options are unre-
stricted. By the terminating effect of playing in a heap of size one this composite game
should be equal to the game H = {∞ | ∞̄}.

Figure 17.4: The game G is split into two components.

The option in Figure 17.5 is an entailing move, and the next player must continue in
this component, even if other moves are available. Therefore the game form of the
entailing option in Figure 17.5 is

{∞ | 1 + 1, 1entail }

if Left just moved, where 1 denotes a heap of size one. The terminating threat forces
Right to play here, instead of possibly using other options.
Intuitively, either way of responding reduces to a game of the form H = {∞ | ∞̄}.
In conclusion, the heap of size three should be equal to the game H, and disjunctive
sum play has been restored. All this intuition will be rigorously justified in the coming
complete theory for affine impartial play.

Figure 17.5: An entailing option.

It turns out that affine impartial games require only a small extension to the Sprague–Grundy theory. Namely, the game in (17.1), obtained from the nimstring position in Figure 17.2, equals the game H = {∞ | ∞̄} of the previous paragraph, modulo affine impartial, and later we will assign the value K to the equivalence class of such games.

2 Affine literal forms and order


This section aims at Theorem 6, a comparison theorem for affine normal play that
suffices for the purpose of this paper. We begin by defining the fundamental concepts
for affine normal play, postponing the restriction to affine impartial until the next section.
In classical combinatorial game theory, the normal play forms Np are recursively
constructed from the empty set. The form {⌀ | ⌀} = 0 is the only form of day zero and
the only form without options. The forms {0 | ⌀} = 1, {⌀ | 0} = −1, and {0 | 0} = ∗ are
born on day 1, and so on.
The forms of affine normal play, denoted Np ∞ , are recursively constructed from the games ∞ (infinity) and ∞̄ (minus infinity) [5]. The forms ∞ and ∞̄ are the only forms without options. The forms {∞̄ | ∞} = 0, {∞ | ∞̄} = ±∞, {∞ | ∞}, and {∞̄ | ∞̄} are born on day zero, and so on.
The order of Np ∞ is defined in the standard way. Consider the four perfect play
outcome classes L (Left wins), N (Next player wins), P (Previous player wins), and
R (Right wins). From Left’s perspective, the first outcome is the best (she wins, re-
gardless of playing first or second), and the fourth is the worst (she loses, regardless
of whether playing first or second). On the other hand, regarding N and P , the vic-
tory depends on playing first or second, so these outcomes are not comparable. These
considerations explain the partial order in an “outcome diamond”.

We write G ∈ L or, equivalently, o(G) = L if the outcome of G ∈ Np ∞ is Left wins,


and so on. The evaluation of games in Np ∞ is based on the following axiomatic list.

Axiom 1 (Absorbing nature of infinities). The infinities satisfy
1. ∞ ∈ L ;
2. ∞̄ ∈ R ;
3. for all X ∈ Np ∞ \ {∞̄}, ∞ + X = ∞;
4. for all X ∈ Np ∞ \ {∞}, ∞̄ + X = ∞̄;
5. “∞ + ∞̄” is not defined.

Addition of games is defined as usual, apart from items 3 and 4. The fifth item is natural in terms of perfect play, since if ∞ appears, then ∞̄ cannot appear and vice versa.
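
As a small illustration of Axiom 1 (our own Python sketch, with `INF` and `MINF` standing for ∞ and ∞̄), the infinities act as absorbing elements of the disjunctive sum:

    INF, MINF = "inf", "minf"   # the terminating games ∞ and ∞̄

    def affine_sum(g, h):
        # Disjunctive sum respecting Axiom 1: infinities absorb all other forms.
        if (g, h) in ((INF, MINF), (MINF, INF)):
            raise ValueError("∞ + ∞̄ is not defined (Axiom 1.5)")
        if INF in (g, h):
            return INF           # Axiom 1.3
        if MINF in (g, h):
            return MINF          # Axiom 1.4
        gl, gr = g               # otherwise both summands are ordinary forms
        hl, hr = h
        return (tuple(affine_sum(x, h) for x in gl) + tuple(affine_sum(g, x) for x in hl),
                tuple(affine_sum(x, h) for x in gr) + tuple(affine_sum(g, x) for x in hr))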
The definitions of equality and partial order of games are based on the outcome
diamond.

Definition 1 (Order and equality of games). Let G, H ∈ Np ∞ . Then G ≽ H if for every form X ∈ Np ∞ \ {∞, ∞̄}, o(G + X) ⩾ o(H + X). Moreover, G = H if G ≽ H and H ≽ G.

Note that the exclusion of the infinities does not diminish the generality of the
definition, but is necessary due to Axiom 1.5. As usual, we have the following obser-
vations. If G = H, then replacing H by G or G by H does not hurt the players under
any circumstances. Similarly, if G ≽ H then replacing H by G does not hurt Left, and
replacing G by H does not hurt Right.

Theorem 2. Let G ∈ Np ∞ . Then ∞ ≽ G and G ≽ ∞̄.

Proof. If X ∈ Np ∞ \ {∞, ∞̄}, then, by Axiom 1.3, ∞ + X = ∞. Hence, by Axiom 1.1, o(∞ + X) = o(∞) = L . Therefore, for every X ∈ Np ∞ \ {∞, ∞̄}, we have o(∞ + X) ⩾ o(G + X), and so ∞ ≽ G; proving that G ≽ ∞̄ is analogous.

The concept of a check is fundamental to Np ∞ . Indeed, it is an alternative, and perhaps more explicit (at least for Chess-playing readers), term for an entailing move, as seen in the Introduction.

Definition 2 (Check games). Consider G ∈ Np ∞ . If ∞ ∈ Gℒ (∞̄ ∈ Gℛ ), then G is a Left-check (Right-check). If G is a Left-check or a Right-check, then G is a check. Denote by G⃗L (G⃖R ) a Left (Right) option of G that is a Left-check (Right-check).

Of course, all checks are asymmetric, apart from the “trivial check” {∞ | ∞̄}. A player would not use this check, because the opponent “check mates” by defending.
Definition 3 (Quiet games). Let G ∈ Np ∞ . If G ≠ ∞ (G ≠ ∞̄) and G is not a Left-check (Right-check), then G is Left-quiet (Right-quiet). If G is Left-quiet and Right-quiet, then G is quiet.

Definition 4 (Conway forms and games). A game G ∈ Np ∞ is a Conway form if G ∉ {∞, ∞̄} and G has no checks as followers. Let Np C ⊆ Np ∞ denote the substructure of Conway forms. A game G ∈ Np ∞ is a Conway game if it equals a Conway form.

Example 3. The game G = {{∞̄ | ∞} | {∞̄ | ∞}} = {0 | 0} = ∗ is a Conway form (no checks as followers). The game G′ = {{∞ | ∗} | {∗ | ∞̄}} is not a Conway form because there are checks as followers. However, we will later see that G′ = G. Therefore G′ is a Conway game.

In general, when we say form, we mean the literal form, and when we say game,
we usually mean (any member in) the full equivalence class of games. When we write
G ∈ Np ∞ , we usually refer to the literal form, but the context may decide.
Some classical theorems are still available in Np ∞ .

Theorem 4 (Fundamental theorem of affine normal play). If G ∈ Np ∞ , then G ≽ 0 if and only if G ∈ L ∪ P .

Proof. Assume that G ≽ 0. We have 0 ∈ P , and so, by the order of outcomes, G ∈ L ∪ P .
Suppose now that G ∈ L ∪ P . If G = ∞, then G ≽ 0 by Theorem 2; hence assume
that G ≠ ∞. Let X ∈ Np ∞ \ {∞, ∞̄}.
If playing first, Left wins X with the option X L , then she also wins G + X with
the option G + X L . Essentially, she mimics the strategy used when X is played alone,
answering locally when Right plays in G. Due to the assumption G ∈ L ∪ P , this is a
winning strategy for Left in G + X.
If Left, playing second, wins X, then on G + X, she can respond to each of Right’s
moves locally with a winning move on the same component, because G ∈ L ∪ P .
Thus Left can win G + X playing second.
Therefore o(G + X) ⩾ o(X), and so G ≽ 0.

Corollary 1 (Order-outcome bijection). If G ∈ Np ∞ , then
– G ≻ 0 if and only if G ∈ L ;
– G = 0 if and only if G ∈ P ;
– G ‖ 0 if and only if G ∈ N ;
– G ≺ 0 if and only if G ∈ R .

Proof. The statement of Theorem 4 can equivalently be “G ≼ 0 if and only if G ∈ R ∪ P ”, so we can use that fact too.
Suppose that G ≻ 0. By Theorem 4, G ∈ L ∪ P . We cannot have G ∈ P , for
otherwise, G ∈ R ∪ P and G ≼ 0. Therefore G ∈ L . Conversely, suppose that G ∈ L . By
Theorem 4 we have G ≽ 0. We cannot have G = 0, for otherwise, G ≼ 0 and G ∈ R ∪ P .
Hence G ≻ 0. Thus the first equivalence holds.
The proof of the fourth equivalence is analogous.
For the second equivalence, if G = 0, then G ≽ 0 ∧ G ≼ 0. So G ∈ (L ∪ P ) ∩ (R ∪ P ) = P .
The third equivalence is a consequence of eliminating all other possibilities.

It is known that Np is a group. By Corollary 1 we may deduce that Np ∞ is only a monoid. Namely, if G = {∞ | 0}, then for any X ∈ Np ∞ \ {∞, ∞̄}, G + X ∈ L ∪ N (playing first, Left wins). Hence, for all X, G + X ≠ 0, and G is noninvertible. Thus, in general, the comparison of G with H cannot be done by playing the game G − H, because, sometimes, −H does not exist.
However, the following theorem shows that not everything is lost. The conjugate
of a given game switches roles of the players.

Definition 5 (Conjugate). The conjugate of G ∈ Np ∞ is

G̅ = ∞̄ if G = ∞,
G̅ = ∞ if G = ∞̄,
G̅ = {G̅ℛ | G̅ℒ } otherwise,

where G̅ℒ denotes the set of literal forms G̅L for GL ∈ Gℒ , and similarly for G̅ℛ .
Theorem 5. If G ∈ Np ∞ is a Conway game, then G is invertible, and −G = G̅.

Proof. Suppose first that G is a Conway form. If G = 0, then G̅ = 0, and the theorem holds. Otherwise, let us verify that G + G̅ is a P -position. If Left, playing first, chooses GL + G̅, because this game is not ∞ (G is not a check), then Right can answer with GL + G̅L , and by induction, because GL is a Conway form with no checks as followers, that option is equal to zero. Since by Corollary 1 that option is a P -position, Right wins. Analogous arguments work for the other options of the first player, and so G + G̅ is a P -position. Again, G + G̅ = 0 by Corollary 1.
Suppose now that G is not a Conway form. Because it is a Conway game, by definition it is equal to some G′ ∈ Np C . The first paragraph proved that G′ + G̅′ = 0. Also, by symmetry, G̅ is equal to G̅′ . Therefore G′ + G̅′ = 0 implies G + G̅ = 0.

Lemma 1. Let G, H ∈ Np ∞ , and let J be an invertible form of Np ∞ . Then

G ≽ H if and only if G + J ≽ H + J.

Proof. (⇒) Consider any X ∈ Np ∞ \ {∞, ∞̄} and let X ′ = J + X. Since J is invertible, J is neither ∞ nor ∞̄, and so X ′ is neither ∞ nor ∞̄. The definition of order implies o(G + X ′ ) ⩾ o(H + X ′ ), that is, o(G + J + X) ⩾ o(H + J + X). Thus the arbitrariness of X ∈ Np ∞ \ {∞, ∞̄} implies G + J ≽ H + J.

(⇐) Consider any X ∈ Np ∞ \ {∞, ∞̄} and let X ′ = −J + X (J is invertible, i. e., −J exists, and J − J = 0). Since −J is invertible, −J is neither ∞ nor ∞̄, and so X ′ is neither ∞ nor ∞̄. By the definition of order, o(G + J + X ′ ) ⩾ o(H + J + X ′ ), that is, o(G + J − J + X) ⩾ o(H + J − J + X). Hence o(G + X) ⩾ o(H + X), and so, given the arbitrariness of X ∈ Np ∞ \ {∞, ∞̄}, G ≽ H.

Theorem 6. Let G be any form of Np ∞ , and suppose that H is an invertible form of Np ∞ . Then

G ≽ H if and only if G − H ∈ L ∪ P , and G = H if and only if G − H ∈ P .

Proof. By Lemma 1, G ≽ H if and only if G − H ≽ H − H. Therefore we have G ≽ H if and only if G − H ≽ 0. By Theorem 4 this is the same as G ≽ H if and only if G − H ∈ L ∪ P . Finally, G = H if and only if G − H ∈ P , by G ≽ H ∧ H ≽ G.

Theorem 7. Let G be any form of Np ∞ , and let H ∈ Np ∞ be a Conway game. Then
– G ≽ H if and only if G + H̅ ∈ L ∪ P , and
– G = H if and only if G + H̅ ∈ P .
Proof. These are direct consequences of Theorems 5 and 6.

In a follow-up paper [5], where we study the full game space Np ∞ , we provide
a solution of the general case of G ≽ H.

3 Affine impartial theory


To propose an extension of the Sprague–Grundy theory, we first define the concept of
an affine impartial game.3 Of course, rulesets like nimstring should be impartial.

Definition 6 (Symmetric game). Consider a form G ∈ Np ∞ . Then G is symmetric if G ∉ {∞, ∞̄} and Gℛ = G̅ℒ .

Definition 7 (Affine impartial). A form G ∈ Np ∞ is affine impartial if it is symmetric and all quiet followers of G are symmetric. The subset of affine impartial games is Im ∞ ⊂ Np ∞ .

Of course, a nonquiet game either has no option or is a check, and so (unless a triv-
ial check) is by definition asymmetric. However, this is the only exception to symmetry
in the world of affine impartial games. It is easy to check that Im∞ satisfies the stan-
dard closure properties of combinatorial games, that is, the closure of taking options,
addition, and conjugates.
The following result must hold for any class of games that claims to be “impartial”.

Theorem 8 (Affine impartial outcomes). If G is a symmetric form, then G ∈ N ∪ P .

3 In terms of ruleset: here “affine impartial” is an abbreviation of affine normal play impartial in the
sense that if the player to move cannot complete their move, then they lose.

Proof. This proof uses a strategy-stealing argument. Suppose that G ∈ L . Then Left wins G playing first with some option GL . Hence by symmetry Right wins G playing first with G̅L , which contradicts G ∈ L . A similar argument holds against G ∈ R .
We want to restrict our analysis to Im∞ . Therefore we define equality modulo Im∞ .

Definition 8 (Impartial equality). Consider forms G, H ∈ Im∞ . Then G =Im∞ H if o(G + X) = o(H + X) for every form X ∈ Im∞ .

Observation 9. Of course, G = H in Np ∞ implies G =Im∞ H. The opposite direction is not true. We can have G =Im∞ H and G ≠ H in Np ∞ if there is no distinguishing game in Im∞ . A simple example is G = {{∞ | 0}, 0 | {0 | ∞̄}, 0} and H = {{∞ | ∗}, ∗ | {∗ | ∞̄}, ∗}. As we will see, these games are indistinguishable modulo Im ∞ . However, the game X = {0 | −1} distinguishes them in Np ∞ ; playing first, Left wins G + X but loses H + X.

It is easy to verify whether a form in Im∞ equals a nimber.

Theorem 10 (Nimbers). Let G ∈ Im∞ . Then G =Im∞ ∗n if and only if G + ∗n ∈ P .

Proof. Suppose that G + ∗n ∈ P . By Theorem 7, G = ∗n modulo Np ∞ , and so G =Im∞ ∗n.
Suppose now that G =Im∞ ∗n. By Theorem 8, G + ∗n ∈ N ∪ P , since impartiality is closed under addition. If G + ∗n ∈ N , then since ∗n + ∗n ∈ P , we would have G ≠Im∞ ∗n, a contradiction. Hence G + ∗n ∈ P .

Notation 11. Let nim ⊆ Im∞ denote the subset of affine impartial games that equal
nimbers.

It is well known that in classical combinatorial game theory the impartial values
are nimbers. We will prove that there is exactly one more value modulo Im∞ , a game
K, called the moon. In [3, vol. 2, p. 398], we can read “A loony move is one that loses for
a player, no matter what other components are.” The following general definition is
motivated by that idea.

Definition 9 (Loony game). A game G ∈ Np ∞ is loony if for all quiet X ∈ Np ∞ ∩ (N ∪ P ), G + X ∈ N .

Thus in our interpretation a “loony move” exposes a loony game.


There are no loony games in Np . Suppose that G ∈ Np ∩ (P ∪ L ∪ R ) is a loony
game. Of course, G + 0 ∈ P ∪ L ∪ R , a contradiction. Suppose that G ∈ Np ∩ N is
a loony game. In that case, if n is large enough, then G + {n | 0} ∈ L , a contradiction,
since {n | 0} ∈ N is quiet.
There are loony games in Np ∞ . The obvious one is ±∞ = {∞ | ∞̄}, but we can also have impartial quiet loony games. Consider G = {{∞ | 0}, 0 | {0 | ∞̄}, 0} and a quiet X ∈ Np ∞ such that X ∈ P ∪ N . If X ∈ P , then the first player wins moving to X. If X ∈ N , then the first player wins moving to {∞ | 0} + X (Left) or to {0 | ∞̄} + X (Right).

Notation 12. The moon is the game form K = {∞ | ∞̄}.

When a player moves to K + X for any X ∈ N ∪ P , he “goes to the moon” and loses.

Theorem 13 (Loony uniqueness). All loony games are equal modulo Im∞ .

Proof. Consider two loony games G and G′ . We know that all quiet X ∈ Im∞ belong to
N ∪ P . By the definition of a loony game we have G + X ∈ N and G′ + X ∈ N . On the
other hand, if X ∈ Im∞ is not quiet, then ∞ ∈ X ℒ , and since X is impartial, ∞̄ ∈ X ℛ .
Hence G + X ∈ N and G′ + X ∈ N . In all cases, o(G + X) = o(G′ + X) = N , and the
theorem is proved.

Observation 14. Two loony games may be different modulo Np ∞ but equal modulo Im ∞ . The games {{∞ | 0}, 0 | {0 | ∞̄}, 0} and {{∞ | 2}, 0 | {−2 | ∞̄}, 0} are loony. These games are different modulo Np ∞ : Left, playing first, loses {{∞ | 0}, 0 | {0 | ∞̄}, 0} − 1 and wins {{∞ | 2}, 0 | {−2 | ∞̄}, 0} − 1. However, as will follow from the theory developed here, we cannot distinguish these two games modulo Im∞ .

To prove an affine impartial minimum excluded rule, we separate the options into
two classes.

Definition 10 (Immediate nimbers). Let G ∈ Im∞ . The set of G-immediate nimbers, de-
noted SG , is the set SG = Gℒ ∩ nim .

Note that, by symmetry, SG = Gℛ ∩ nim , and note that SK = ⌀.

Definition 11 (Protected nimbers). Consider a game form G ∈ Im∞ . The set of G-protected nimbers PG is
1. PG = nim , if ∞ ∈ Gℒ ;
2. PG = {∗n : G⃗L + ∗n ∈ L , G⃗L ∈ Gℒ } otherwise.

The second item says: if ∞ ∉ Gℒ , then ∗n ∈ PG if there is a check G⃗L = {∞ | GLℛ } ∈ Gℒ such that Right, playing first, loses G⃗L + ∗n; that is, playing first, Left is protected against those nimbers in a disjunctive sum.
Similarly to Definition 10, to obtain the same set, we could have defined PG with
respect to Right options.
Note that PK = nim . This statement holds for the literal form K = ±∞. However,
we can show that by using instead the form K = {{∞ | 0}, 0 | {0 | ∞̄}, 0} as in (17.1),
then PK = nim \ {0}. The output of “protected” is sensitive to which form we choose.
When the underlying game form is understood, we simply refer to the immediate
and protected nimbers, respectively.

Example 15. Let G ∈ Im∞ be such that the Left options are 0, ∗2, and {∞ | {∗ | ∞̄}, 0}. Of course, SG = {0, ∗2}. On the other hand, playing first, Left can use the check to win G + ∗. Because of that, PG = {∗}. An important observation is that although Left is protected against the nimber ∗, Left cannot force a Left move to ∗ in G; but if Right moves to 0, then Left wins G + ∗ anyway.

Sometimes, Right can maneuver Left’s eventual play to a nimber, or worse, via a
sequence of “check upon check”.

Definition 12 (Maneuverable form). A quiet form G ∈ Im∞ is maneuverable if after each Left move that is neither to a nimber nor to ∞, Right can force, with checks, a Left move to a nimber or a move by either player to ∞̄. A symmetrical effect happens after each Right move that is neither to a nimber nor to ∞̄.

Example 16. The form G = {∗2, {∞ | {0, ∗4 | ∞}, 0} | ∗2, {0, {∞ | 0, ∗4} | ∞}} is maneu-
verable. If Left avoids the immediate nimber ∗2 by checking, then Right can still force
Left to move to one of the nimbers 0 or ∗4.

Lemma 2. If G ∈ Im∞ is maneuverable, then PG is finite.

Proof. After a Left first move in G, if needed, Right can force with checks a Left move
to a nimber or a move by either player to ∞. Let C be the set of nimbers that can arise
through this forcing strategy by Right. Then C is finite, because we study short games.
Let ∗n be a nimber such that for all ∗m ∈ C, we have n > m. In G + ∗n, after a first
check, say, to G^L + ∗n, Right forces with checks a move by either player to ∞ or a Left
move to ∗m + ∗n (n > m). In the second case, after the sequence, Right wins with a
TweedleDee-TweedleDum move. Thus Left can protect against at most a finite number
of nimbers, which is why PG is finite for maneuverable games.

Let mex(X) denote the smallest nonnegative integer not in X. For a set of nimbers
S = {∗ni }, let 𝒢 (S) = {ni } denote the corresponding set of Sprague–Grundy values.

Lemma 3. If G ∈ Im∞ is maneuverable, then G equals the nimber ∗n, where n =
mex(𝒢 (SG ∪ PG )).

Proof. By Lemma 2 we know that SG ∪ PG is finite. Let n = mex(𝒢 (SG ∪ PG )). Let us
argue that the game G + ∗n ∈ P . If the first player moves in G to a nimber ∗m ∈ SG ,
then because n is excluded from 𝒢 (SG ), he loses. If the first player moves in G to a
quiet non-nimber G′ , then because G′ is not a nimber, G′ + ∗n ∈ N (Theorem 10), and
the first player also loses. If the first player moves in G, giving a check, then because
n is excluded from 𝒢 (PG ), he also loses. Finally, if the first player moves to G + ∗n′
(n′ < n), then because n is the minimum excluded from 𝒢 (SG ∪ PG ), he loses because
the opponent has a direct TweedleDee-TweedleDum move or wins with a check. Hence
G = ∗n by Theorem 10.

Lemma 4. If G, H ∈ Im∞ are not nimbers, then G + H ∈ N .



Proof. Consider G, H ∈ Im∞ \ nim . For a contradiction, assume that the sum of the
birthdays b = b(G) + b(H) is the smallest possible such that G + H ∈ P . Note that b > 0
by the assumptions on G and H. Without loss of generality, we will analyze the move
from G + H to G + H^L . First, we prove two claims that concern local play in G and H,
respectively.

Claim 1. Playing second in G, Left can avoid Left moves to nimbers and moves by ei-
ther player to ∞ until the first Right-quiet move.

Proof of Claim 1. Suppose that Right, playing first in G, could force a Left move to a
nimber or a move by either player to ∞. If so, then in G + H, by giving checks in G,
Right could force some G^RL⋅⋅⋅RL + H = ∗n + H (Right’s turn) or a move by either player
to ∞ + H. Of course, the second situation would be a victory for Right. Regarding the
first case, at that moment the position would be ∗n + H. Because H is not a nimber, by
Theorem 10 we would have ∗n + H ∈ N , which is a winning move for Right. In either
case, Right, as first player, would win. That would contradict G + H ∈ P .

Claim 2. There is H^L such that Left can avoid Left moves to nimbers and moves by
either player to ∞ until the first Right-quiet move.

Proof of Claim 2. This is exactly the same as saying that H is nonmaneuverable. If it
were maneuverable, then by Lemma 3 it would be a nimber, and we would have a con-
tradiction again.

Let us return to the move from G + H to G + H^L . Because G + H ∈ P , Right has a
winning move from G + H^L . However, by Claims 1 and 2, Left can play so that at any
stage before a Right-quiet move, Right is moving in g + h, where g is a follower of G,
and h is a follower of H^L , such that neither g nor h is a nimber.
Eventually, by assumption, there is a winning quiet Right move g^R + h or g + h^R .
Since these are impartial games, we must have g^R + h ∈ P or g + h^R ∈ P . Because h
and g are not nimbers, it follows by Theorem 10 that g^R and h^R are not nimbers.
Therefore we have g^R + h ∈ P or g + h^R ∈ P with both components not nimbers,
which contradicts the smallest birthday assumption. The result follows.

Theorem 17 (Affine impartial values). Every affine impartial form equals a nimber or
the game K (mod Im∞ ).

Proof. Let G ∈ Im∞ . If there is some ∗n such that G + ∗n ∈ P , then G =Im∞ ∗n by
Theorem 10.
Suppose next that G + ∗n ∈ N for all n, so that G does not equal a nimber modulo
Im∞ . By Lemma 4, for all X ∈ Im∞ \ nim , we also have G + X ∈ N . Hence, for all
X ∈ Im∞ , we have G + X ∈ N , and therefore G is a loony game. Because by Theorem 13
all loony games are equal modulo Im∞ and K is a loony game, we have G =Im∞ K.

Observation 18. A form can be loony modulo Im∞ and not loony modulo Np ∞ . An
example is the form G = {∗, {∞ | ∗} | ∗, {∗ | ∞}}. This game is not loony modulo Np ∞
because if X = {0 | −1} ∈ N , then playing first, Left loses G + X. However, G =Im∞ K.
This follows by Theorem 17, since G does not equal any nimber; if Right starts G + ∗n,
then he wins by an appropriate parity consideration.

Theorem 19. The game K is absorbing modulo Im∞ , that is, K + Y =Im∞ K for all
Y ∈ Im∞ .

Proof. Since K = ±∞, regardless of what X ∈ Im∞ is, the first player wins both K + Y + X
and K + X. Therefore, by the definition of equality of games, K + Y =Im∞ K.

Corollary 2. The game K is an idempotent modulo Im∞ , that is, K + K =Im∞ K.

Proof. This is a trivial consequence of Theorem 19.

Definition 13. The Sprague–Grundy value of the moon is 𝒢 (K) = ∞.

The following theorem explains how the Sprague–Grundy value of G ∈ Im∞ is
determined by the set SG ∪ PG .

Theorem 20 (Affine impartial minimum excluded rule). Let G ∈ Im∞ . We have the fol-
lowing possibilities:
– If SG ∪ PG = nim , then G = K and mex(𝒢 (SG ∪ PG )) = ∞;
– If SG ∪ PG ≠ nim , then G = ∗ (mex(𝒢 (SG ∪ PG ))).

Proof. If SG ∪ PG = nim , then G + ∗n ∈ N for all n. Because of that, G is not a nimber,
and G = K by Theorem 17. If SG ∪ PG ≠ nim , then we use the same argument as in the
proof of Lemma 3.

Corollary 3. If all the options of a game G ∈ Im∞ are quiet, then G is a nimber.

Proof. If all the options of a game G ∈ Im∞ are quiet, then PG = ⌀. Therefore SG ∪ PG =
SG ≠ nim , and G = ∗ (mex(𝒢 (SG ))) by Theorem 20.
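
The rule is easy to mechanize for experimentation. The following sketch (ours, not part of the original text) encodes a finite set of Grundy numbers as a Python set and a cofinite set of nimber values by the pair ('nim-minus', X); affine_mex then returns a Grundy number or the symbol 'K' for the moon.

```python
def affine_mex(S, P):
    """Affine impartial minimum excluded rule (Theorem 20).

    S is a finite set of Grundy numbers.  P is either a finite set of
    Grundy numbers or a pair ('nim-minus', X) standing for the cofinite
    set of all nimber values except those in X.  Returns the Grundy
    number of the form, or 'K' for the moon.
    """
    if isinstance(P, tuple):          # S ∪ P is cofinite
        missing = P[1] - S            # Grundy numbers outside S ∪ P
        return 'K' if not missing else min(missing)
    m = 0
    while m in S | P:                 # ordinary mex of S ∪ P
        m += 1
    return m

# Examples from the case studies below: a stack of 5 in top entails has
# S = ⌀ and P = nim minus {0}; nimstring position (l) has S = {0}, P = ⌀.
assert affine_mex(set(), ('nim-minus', {0})) == 0
assert affine_mex({0}, set()) == 1    # the nimber ∗
```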

4 Case study: nimstring


In the introduction, we promised to show that the following component equals ∗2:
[figure omitted]
Study the positions (a)–(f) [figure omitted]:
All (a), (b), (c), (d), and (e) are 𝒫 -positions. The game value of (f) is K =
{{∞ | 0}, 0 | {0 | ∞}, 0}.

Other positions that equal K are the following [figure omitted]:

In position (l) the central horizontal move is option (d), which is equal to 0. The
other options are (f) and (g), which are equal to K. Therefore the literal form is

l = {0, K, K | 0, K, K}

with Sl = {0} and Pl = ⌀. Applying the affine impartial minimum excluded rule, we
conclude that the position is ∗.

Position (m) is also equal to ∗, that is, 0 + ∗.

Now we are ready for (n), a more complex situation. The literal form is

n = {h, i, k, {∞ | l} | h, i, k, {l | ∞}},

that is,

n = {K, K, K, {∞ | ∗} | K, K, K, {∗ | ∞}}.

Hence Sn = ⌀, and Pn = nim \ {∗}. Applying the affine impartial minimum excluded
rule, we conclude that the position is ∗.

Going back to the original question, we have the following [figure omitted]:

The form is {n, m, c | n, m, c}, that is, {∗, ∗, 0 | ∗, ∗, 0} = ∗2. Here n represents a play of
the top or bottom bar, m represents a play of some middle bar, and c represents play
of the left line.

5 Case study: top entails


We denote by n the literal form of a stack of size n. The literal form of the Left removal
of the top coin from a stack of size n is {∞ | (n − 1)ℛ } (and symmetrically from Right’s
point of view). With that in mind, let us compute the first few values. First, we do it
the tedious way, and then later after Theorem 21, we propose the slick recursive way
for a few more values in a table format.
Of course, 0 = {∞ | ∞}. The first player loses. Moreover,
– 1 = {{∞ | 0ℛ } | {0ℒ | ∞}} = {{∞ | ∞} | {∞ | ∞}}. Therefore S1 = ⌀ and P1 = nim .
Using the affine impartial minimum excluded rule, 1 = K. In the next step, for
ease, we will use the form K = ±∞.
– 2 = {1 + 1, {∞ | 1ℛ } | 1 + 1, {1ℒ | ∞}} = {K, {∞ | ∞} | K, {∞ | ∞}}. Therefore S2 = ⌀
and P2 = ⌀. Using the affine impartial minimum excluded rule, 2 = 0.
– 3 = {1 + 2, {∞ | 2ℛ } | 1 + 2, {2ℒ | ∞}}. This game is equal to {K, {∞ | K, {∞ | ∞}} |
K, {K, {∞ | ∞} | ∞}}. Therefore S3 = ⌀ and P3 = nim . Using the affine impartial
minimum excluded rule, 3 = K. In the next step, for ease, we will use the form
K = ±∞.
– 4 = {1 + 3, 2 + 2, {∞ | 3ℛ } | 1 + 3, 2 + 2, {3ℒ | ∞}}. This game is equal to {K, 0, {∞ |
∞} | K, 0, {∞ | ∞}}. Therefore S4 = {0} and P4 = ⌀. Using the affine impartial
minimum excluded rule, 4 = ∗.
– 5 = {1 + 4, 2 + 3, {∞ | 4ℛ } | 1 + 4, 2 + 3, {4ℒ | ∞}}. This game is equal to {K, K, {∞ |
0} | K, K, {0 | ∞}}. Therefore S5 = ⌀ and P5 = nim \ {0}. Using the affine impartial
minimum excluded rule, 5 = 0.
– 6 = {1 + 5, 2 + 4, 3 + 3, {∞ | 5ℛ } | 1 + 5, 2 + 4, 3 + 3, {5ℒ | ∞}}. This game is equal
to {K, ∗, K, {∞ | {0 | ∞}} | K, ∗, K, {{∞ | 0} | ∞}}. Therefore S6 = {∗} and P6 = {0}.
Using the affine impartial minimum excluded rule, 6 = ∗2.
– 7 = {1 + 6, 2 + 5, 3 + 4, {∞ | 6ℛ } | 1 + 6, 2 + 5, 3 + 4, {6ℒ | ∞}}. This game is equal
to {K, 0, K, {∞ | ∗, {{∞ | 0} | ∞}} | K, 0, K, {∗, {∞ | {0 | ∞}} | ∞}}. So S7 = {0},
P7 = nim \ {0, ∗}, and with the affine impartial minimum excluded rule, 7 = ∗.

Consider a stack of size n. We claim that an entailing move by Left does not protect
her against an element in Sn−1 . To see this, let ∗m ∈ Sn−1 . Moving in n + ∗m, if Left
chooses {∞ | (n − 1)ℛ } + ∗m, then Right answers ∗m + ∗m and wins. On the other
hand, we observe that an entailing move by Left does not protect her against the el-
ements of Pn−1 . To see this, let ∗m be an element of Pn−1 . Moving in n + ∗m, if Left
chooses {∞ | (n − 1)ℛ } + ∗m, then because in n − 1, Right is protected against ∗m, he

has an entailing winning move in the first component. Therefore we have the general
recursion

Pn = nim \ (Sn−1 ∪ Pn−1 ).

The set Sn consists of the values of the positions of the form ℓ + m with ℓ + m = n and
ℓ, m > 0, disregarding any sum in which K appears. Hence the recurrence of top entails
is as follows.

Theorem 21. The sets P0 = S0 = ⌀, and, for all n > 0, Pn = nim \ (Sn−1 ∪ Pn−1 ) and
Sn = {𝒢 (ℓ + m) : ℓ + m = n, ℓ, m > 0, ℓ ≠ K, m ≠ K}.
Proof. This is explained in the above paragraph.

Now we can fill a table in an easy way.
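
The following sketch (ours; it is not part of the original analysis) implements the recursion of Theorem 21 directly, encoding nimbers by their Grundy numbers, the loony value K by None, and a cofinite set of nimbers by its finite complement; running it reproduces the table.

```python
def mex(s):
    """Smallest nonnegative integer not in the set s."""
    m = 0
    while m in s:
        m += 1
    return m

def top_entails(limit):
    """Grundy values of top entails stacks 0, ..., limit - 1.

    A value is a nonnegative integer, or None for the loony value K.
    P[n] is ('fin', X) for a finite set X, or ('cof', X) for nim minus X.
    """
    G, S, P = [], [], []
    for n in range(limit):
        if n == 0:
            Sn, Pn = set(), ('fin', set())
        else:
            # S_n: values of splits l + m = n with l, m > 0, disregarding
            # any sum in which K appears (the Grundy value of a sum is xor)
            Sn = {G[l] ^ G[n - l] for l in range(1, n)
                  if G[l] is not None and G[n - l] is not None}
            # P_n = nim minus (S_{n-1} union P_{n-1})
            kind, X = P[-1]
            Pn = ('cof', S[-1] | X) if kind == 'fin' else ('fin', X - S[-1])
        S.append(Sn)
        P.append(Pn)
        kind, X = Pn
        if kind == 'cof':
            missing = X - Sn          # Grundy values outside S_n ∪ P_n
            G.append(None if not missing else min(missing))
        else:
            G.append(mex(Sn | X))
    return G

# The first 13 values match the table: 0, K, 0, K, 1, 0, 2, 1, 3, 0, 1, 3, 4.
print(top_entails(13))
```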

n Sn Pn Sn ∪ Pn 𝒢-value (mex rule)

0 ⌀ ⌀ ⌀ 0
1 ⌀ nim nim ∞
2 ⌀ ⌀ ⌀ 0
3 ⌀ nim nim ∞
4 {0} ⌀ {0} 1
5 ⌀ nim\{0} nim\{0} 0
6 {∗} {0} {0, ∗} 2
7 {0} nim\{0, ∗} nim\{∗} 1
8 {0, ∗2} {∗} {0, ∗, ∗2} 3
9 {∗} nim\{0, ∗, ∗2} nim\{0, ∗2} 0
10 {0, ∗3} {0, ∗2} {0, ∗2, ∗3} 1
11 {0, ∗2} nim\{0, ∗2, ∗3} nim\{∗3} 3
12 {0, ∗, ∗2} {∗3} {0, ∗, ∗2, ∗3} 4

With the recursion, we know that n = K if and only if Sn−1 ∪ Pn−1 ⊆ Sn . That happens
for n = 2403, n = 2505, and n = 33243, as mentioned in [8]. One of three possibilities
must hold: a) only finitely many stacks take a finite nimber value, b) only finitely many
stacks take the loony value, or c) infinitely many stacks take finite nimber values and
infinitely many take the loony value. However, it is an open problem which case occurs.
At the first Combinatorial Games Workshop at MSRI, John Conway proposed that
an effort should be made to devise some ruleset with entailing moves that is nontrivial
but (unlike top entails) susceptible to a complete analysis. As a sequel to this work,
we are finalizing a paper [6] with a proposal of a ruleset to meet Conway’s suggestion.

Bibliography
[1] M. Albert, R. J. Nowakowski, D. Wolfe, Lessons in Play: An Introduction to Combinatorial Game
Theory, A. K. Peters, 2007.
[2] E. R. Berlekamp, The Dots and Boxes Game: Sophisticated Child’s Play, A K Peters, Ltd., 2000.
[3] E. R. Berlekamp, J. H. Conway, R. K. Guy, Winning Ways, Academic Press, London, 1982.
[4] J. H. Conway, On Numbers and Games, Academic Press, 1976.
[5] U. Larsson, R. J. Nowakowski, C. P. Santos, Combinatorial games with checks and terminating
moves, preprint.
[6] U. Larsson, R. J. Nowakowski, C. P. Santos, Electric cables, preprint.
[7] A. N. Siegel, Combinatorial Game Theory, American Math. Soc., 2013.
[8] J. West, New values for top entails, in Games of No Chance, MSRI Publications, 29, 345–350,
1996.
James B. Martin
Extended Sprague–Grundy theory for locally
finite games, and applications to random
game-trees
Abstract: The Sprague–Grundy theory for finite games without cycles was extended
to general finite games by Cedric Smith and by Aviezri Fraenkel and coauthors. We
observe that the same framework used to classify finite games also covers the case of
locally finite games (that is, games where any position has only finitely many options).
In particular, any locally finite game is equivalent to some finite game. We then study
cases where the directed graph of a game is chosen randomly and is given by the tree
of a Galton–Watson branching process. Natural families of offspring distributions dis-
play a surprisingly wide range of behavior. The setting shows a nice interplay between
ideas from combinatorial game theory and ideas from probability.

1 Introduction
Among the plethora of beautiful and intriguing examples to be found in Elwyn
Berlekamp, John Conway, and Richard Guy’s Winning Ways, there is the game of
Fair Shares and Varied Pairs [1, Ch. 12]. The game is played with some number of
almonds, which are arranged into heaps. A move of the game consists of either
– dividing any heap into two or more equal-sized heaps (hence “fair shares”), or
– uniting any two heaps of different sizes (hence “varied pairs”).

The only position from which no move is possible is the one where all the almonds are
completely separated into heaps of size 1. When that position is reached, the player
who has just moved is the winner.
Fair Shares and Varied Pairs is a loopy game: the directed graph of game po-
sitions has cycles, so the game can return to a previously visited position. The way in
which the loopiness manifests itself depends on the number of almonds:
– With three or fewer almonds, there are no cycles. The game is nonloopy.

Acknowledgement: Many thanks to Alexander Holroyd and Omer Angel for conversations relating to
this work, particularly during the Montreal summer workshop on probability and mathematical physics
at the CRM in July 2018. I am grateful to an anonymous referee, whose comments have considerably
improved the paper.

James B. Martin, Department of Statistics, University of Oxford, Oxford, United Kingdom, e-mail:
martin@stats.ox.ac.uk

https://doi.org/10.1515/9783110755411-018

– With four to nine almonds, the graph has loops, but all positions are equivalent
to finite nim heaps. Hence in any position, either the first player has a winning
strategy, or the second player has a winning strategy; furthermore, the same is
true for the (disjunctive) sum of any two positions or for the sum of a position
with a nim heap. Berlekamp, Conway, and Guy call such behavior latently loopy.
“This kind of loopiness is really illusory; unless the winner wants to take you on
a trip, you won’t notice it.”
– With 10 almonds, still any position has either a forced win for the first player or
a forced win for the second player. However, now there exist some patently loopy
positions that are not equivalent to finite nim heaps. If we take the sum of two
such positions, or the sum of such a position with a nim heap, we can obtain a
game where neither player has a winning strategy – the game is drawn with best
play.
– With 11 or more almonds, there exist blatantly loopy positions where the game is
drawn with best play.

In this paper, we explore similar themes, but we concentrate particularly on cases
where the possibility of draws comes not necessarily from cycles in the game-graph,
but instead from infinite paths. (Although the game-graph may be infinite, from any
given position there will be only finitely many possible moves.)
We also focus on situations where the directed graph of the game is chosen at
random. The randomness is only in the choice of the graph, that is, of the “rules of
the game”. All the games themselves will be combinatorial games in the usual sense,
with full information and with no randomness.
Here is an example. Consider a population where each individual reproduces with
some given probability p ∈ (0, 1). If an individual reproduces, then it has four children.
We start with a single individual (the “root”). With probability 1 − p, the root has no
children, and with probability p, the root has four children, forming generation 1. If
the root has no children, then in turn each of those children itself has no children
with probability 1 − p, and has four children with probability p. The collection of those
families forms generation 2, whose members again go on to reproduce in the same
way, and so on. All the decisions are made independently. From the family tree of this
process we form a directed graph by taking the individuals as vertices and adding an
arc from each vertex to each of its children. This is an example of a Galton–Watson tree
(or Bienaymé tree). In the game played on this tree, from every position there are either
zero or four possible moves. Again we consider normal play: if there are no moves
possible from a position, then the next player to move loses.
Note, for example, that the tree could be trivial: with probability 1 − p, it consists
of just a single vertex, or it could be larger but finite (its size can be any integer that is
congruent to 1 mod 4), as shown in the example in Figure 18.1. However, the tree can
also be infinite.

Figure 18.1: An example of a finite directed graph that could arise from the Galton–Watson tree
model considered in the introduction with out-degrees 0 and 4.

The game played with such a tree as its game-graph displays very interesting parallels
with that of Fair Shares and Varied Pairs described above. The behavior depends
on the value of the parameter p. We will find that there are thresholds a0 = 1/4, a1 ≈
0.52198, and a2 = 5^{3/4} /4 ≈ 0.83593 such that the following hold.
– For p ≤ a0 , the tree is finite with probability 1.
– For a0 < p ≤ a1 , there is positive probability that the tree is infinite. However,
with probability 1, all its positions are equivalent to finite nim heaps, and so in
particular every position has a winning strategy for one or the other of the players.
– For a1 < p ≤ a2 , still with probability 1, every position has a winning strategy for
one player or the other. However, there is now positive probability that the tree has
positions that are not equivalent to finite nim heaps. The sum of two such games,
or the sum of such a game with a nim heap, may be drawn with best play.
– For p > a2 , with positive probability, the tree has positions that are drawn with
best play.

1.1 Background and outline of results


The equivalence of any finite loop-free impartial game to a nim heap was shown in-
dependently by Roland Sprague and by Patrick Grundy in the 1930s. Richard Guy was
a key figure in developing and broadening the scope of the Sprague–Grundy theory
in the next couple of decades, notably, for example, in his 1956 paper with Cedric
Smith [7].
An extension of the Sprague–Grundy theory to finite games that may contain cy-
cles was first described by Smith [13] and was developed extensively in a series of
works by Aviezri Fraenkel and coauthors (e. g., [4, 2, 5]). As well as finite-rank games
that are equivalent to nim heaps, we now additionally have infinite-rank games that
are not equivalent to nim heaps. The “extended Sprague–Grundy value” (or “loopy
nim value”) of such a game is written in the form ∞(𝒜), where 𝒜 ⊂ ℕ is the set of
nim values of the game’s finite-rank options. These infinite-rank games may be either

first-player wins (if 0 ∈ 𝒜) or draws (if 0 ∉ 𝒜). Again, we have equivalence between
two games if and only if they have the same (extended) Sprague–Grundy value.
Smith [13] already envisages extensions of the theory to infinite games, involving
ordinal-valued Sprague–Grundy functions. An extension of a different sort to infinite
graphs was done by Fraenkel and Rahat [3], who extend the finite nonloopy Sprague–
Grundy theory to infinite games that are locally path-bounded in the sense that for any
vertex of the game-graph, the set of paths starting at that vertex has finite maximum
length.
In this paper, we observe that the extended Sprague–Grundy values, which clas-
sify finite games, are also enough to classify the class of locally finite games in which
every position has finitely many options. As a result, any such locally finite (perhaps
cyclic) game is equivalent to a finite (perhaps cyclic) game.
We then focus in particular on applying the theory to games whose directed graph
is given by a Galton–Watson tree, of which the 0-or-4 tree described in the previous
section is an example. Galton–Watson trees provide an extremely natural model of
a random game-tree. They have a self-similarity, which can be described as follows:
the root individual has a random number of children (distributed according to the off-
spring distribution), and then conditional on that number of children, the sub-trees of
descendants of each of those children are independent and have the same distribution
as the original tree.
Games on Galton–Watson trees (including normal play, misère play, and other
variants) are studied by Alexander Holroyd and the current author in [9]. There a par-
ticular focus was on determining which offspring distributions give positive probabil-
ity of a draw and on describing the type of phase transition that occurs between the
sets of distributions with and without draws. In this paper, we concentrate on normal
play; but, armed with the extended Sprague–Grundy theory, we can investigate, for ex-
ample, whether infinite-rank positions occur in games without draws (the case anal-
ogous to Berlekamp, Conway, and Guy’s “patently loopy” behavior described above).
This setting shows a very nice interplay between ideas from combinatorial game the-
ory and ideas from probability.
One tool on which we rely heavily is the study of the behavior of the game-graph
when the set 𝒫 of its second-player-winning positions is removed. This reduction be-
haves especially nicely in the Galton–Watson setting. For example, if we take a Galton–
Watson tree for which draws have probability 0, condition the root to be a first-player
win, and remove the set 𝒫 , then the remaining component connected to the root is
again a Galton–Watson tree with a new offspring distribution. Combining iterations
of this procedure with recursions involving the probability generating function of the
offspring distribution yields a lot of information about the infinite-rank positions that
can occur in the tree.
We finish by presenting three particular examples of families of offspring distri-
bution: the Poisson case, the geometric case, and the 0-or-4 case described above. In
these examples alone, we see a surprisingly wide variety of different types of behavior.

We now briefly describe the organization of the paper. In Section 2, we describe the
extended Sprague–Grundy theory for locally finite games. Although the setting is new,
the results can be written in a form that is almost identical to that of the finite case.
We proceed in a way that closely parallels the presentation of Siegel from Section IV.4
of [12] (with some variations of notation). The proofs given in [12] also carry over to the
current setting essentially unchanged, and for that reason, we do not reproduce them
here. A reader who is not already familiar with the extended Sprague–Grundy theory
for finite games may like to start with that section of [12] before reading on further
here.
In Section 3, we discuss the operation of removing 𝒫 -positions from a locally finite
game, and examine its effect on the Sprague–Grundy values of the positions which
remain. For the particular case of trees, we give an interpretation involving mex la-
belings (labelings of the vertices of the tree by natural numbers that obey mex re-
cursions at each vertex).
In Section 4, we introduce games on Galton–Watson trees and develop the analy-
sis via graph reductions and generating function recursions.
Finally, examples of particular offspring distributions are studied in Section 5.

2 Extended Sprague–Grundy theory for games with
infinite paths
In this section, we introduce basic notation and definitions, and then describe the
extended Sprague–Grundy theory for locally finite games. The results look identi-
cal to those that have previously been written for the case of finite games. Proofs of
these results, written for the case of finite games but equally applicable here, can be
found in Section IV.4 of [12]. However, note that formally speaking, the content of
the results is different; this is not just because the scope of the statements is broader,
but also because the definition of equivalence is different (see the discussion in Sec-
tion 2.3).

2.1 Directed graphs and games


We will represent impartial games by directed graphs. If V is a directed graph, then we
call the vertices of V positions. If there is an arc from x to y in V, then we write y ∈ Γ(x)
(or y ∈ ΓV (x) if we want to specify the graph V) – here Γ(x) is the set of options (i. e.,
out-neighbors) of x. We say that the graph V is locally finite if all its vertices have finite
out-degree; that is, Γ(x) is a finite set for each vertex x. We may be deliberately loose
in using the same symbol V to refer both to the graph and to the set of vertices of the
graph.

Informally, we consider two-player games with alternating turns; each turn con-
sists of moving from a position x to a position y, where y ∈ Γ(x). We consider normal
play: if we reach a terminal position, meaning a vertex with out-degree 0, then the next
player to move loses. Since the graphs we consider may have cycles or infinite paths,
it may be that play continues forever without either player winning.
Formally, a locally finite game is a pair G = (V, x) where V is a locally finite di-
rected graph (which is allowed to contain cycles) and x is a vertex of V. We will often
write just x instead of (V, x) when the graph V is understood. For example, for the out-
come function 𝒪, the Sprague–Grundy function 𝒢 , and the rank function (all defined
below), we will often write 𝒪(x), 𝒢 (x), and rank(x), rather than 𝒪((V, x)), 𝒢 ((V, x)),
and rank((V, x)). We use the fuller notation when we need to consider more than one
graph simultaneously (e. g., when considering disjunctive sums of games, or when
considering operations which reduce a graph by removing some of its vertices).
Let V be a directed graph, and let o be a vertex of V. If o has in-degree 0, and if
for every x ∈ V, there exists a unique directed walk from o to x, then we say that V is a
tree with root o. If x and y are vertices of a tree V with y ∈ ΓV (x), then we may say that
y is a child of x in V. We write height(x) for the height of x, which is the number of arcs
in the path from o to x.

2.2 Outcome classes


For a graph V, each position x ∈ V falls into one of three outcome classes.
– If the first player has a winning strategy from x, then we write x ∈ 𝒩 , or 𝒪(x) = 𝒩 ,
and say that x is an 𝒩 -position.
– If the second player has a winning strategy from x, then we write x ∈ 𝒫 , or 𝒪(x) =
𝒫 , and say that x is a 𝒫 -position.
– If neither player has a winning strategy from x, so that with optimal play the game
continues forever without reaching a terminal position, we write x ∈ 𝒟 or 𝒪(x) =
𝒟 and say that x is a 𝒟-position.

Theorem 2.1. Let V be a locally finite graph, and let x ∈ V.
– x is a 𝒫 -position if and only if every y ∈ Γ(x) is an 𝒩 -position.
– x is an 𝒩 -position if and only if some y ∈ Γ(x) is a 𝒫 -position.
– x is a 𝒟-position if and only if no y ∈ Γ(x) is a 𝒫 -position, but some y ∈ Γ(x) is a
𝒟-position.

2.3 Disjunctive sums and equivalence between games


Let V and W be directed graphs. We define V × W to be the directed graph whose
vertices are {(x, y), x ∈ V, y ∈ W} and which has an arc from (u, v) to (x, y) if and only if
either u = x and y ∈ ΓW (v), or x ∈ ΓV (u) and v = y. If V and W are both locally finite,
then so is V × W.
If G = (V, x) and H = (W, y) are locally finite games, we define their (disjunctive)
sum G + H as the locally finite game (V × W, (x, y)).
We have the following interpretation. A position of V × W is an ordered pair of a
position of V and a position of W. To make a move in the sum of games from position
(x, y) of V × W, we must either move from x to one of its options in V, or from y to one
of its options in W (and not both). A position (x, y) is terminal for V × W if and only if
x is terminal for V and y is terminal for W.
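
As a concrete illustration (ours, not from the paper), if a graph is represented as a dictionary mapping each vertex to the list of its options, then the sum graph can be built directly from this definition, at least for finite graphs:

```python
from itertools import product

def disjunctive_sum(V, W):
    """Game graph of V x W: a move changes exactly one coordinate.

    V and W map each vertex to the list of its options; the result does
    the same for the product graph (finite graphs only, for simplicity).
    """
    return {(x, y): [(u, y) for u in V[x]] + [(x, v) for v in W[y]]
            for x, y in product(V, W)}

# A nim heap of size 2 summed with a nim heap of size 1:
nim2 = {0: [], 1: [0], 2: [0, 1]}
nim1 = {0: [], 1: [0]}
total = disjunctive_sum(nim2, nim1)
assert set(total[(2, 1)]) == {(0, 1), (1, 1), (2, 0)}
```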
Now we define equivalence between two locally finite games G and H. The
games G and H are said to be equivalent, denoted by G = H, if 𝒪(G + X) = 𝒪(H + X)
for every locally finite game X.
Note here that we have defined equivalence within the class of locally finite games:
we required the equality to hold for every locally finite game X. The definition (and the
meaning of the results below) would be different if X ranged over a different set. How-
ever, it will follow from the extended Sprague–Grundy theory below that this equiva-
lence extends both the equivalence within the class of finite loopfree graphs, and that
within the class of finite graphs. In other words, two finite games are equivalent within
the class of finite games if and only if they are equivalent within the class of locally finite
games; also, two finite loopfree games are equivalent within the class of finite loopfree
games if and only if they are equivalent within the class of finite games.

2.4 The rank function and the Sprague–Grundy function


Let V be a locally finite directed graph. We recursively define 𝒢n (x) for x ∈ V and n ≥ 0
as follows. First, let 𝒢0 (x) = 0 if x is terminal and 𝒢0 (x) = ∞ otherwise.

Then for n ≥ 1 and given x, write m = mex{𝒢n−1 (y), y ∈ Γ(x)}, and let 𝒢n (x) = m
if for each y ∈ Γ(x), either 𝒢n−1 (y) ≤ m or there is z ∈ Γ(y) with 𝒢n−1 (z) = m; and let
𝒢n (x) = ∞ otherwise.

Proposition 2.2. Let x ∈ V. Then
– either 𝒢n (x) = ∞ for all n, or
– there exist m and n0 such that 𝒢n (x) = ∞ if n < n0 and 𝒢n (x) = m if n ≥ n0 .
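
For a finite graph, Proposition 2.2 guarantees that this iteration stabilizes after finitely many rounds, so it can be run literally. The sketch below (ours; an infinite locally finite graph would first have to be truncated) computes the stabilized values and reports ∞(𝒜) for the infinite-rank vertices:

```python
def mex(s):
    """Smallest nonnegative integer not in s."""
    m = 0
    while m in s:
        m += 1
    return m

def extended_grundy(graph):
    """Iterate the maps G_n on a finite directed graph until they stabilize.

    graph maps each vertex to the list of its options.  Returns a dict
    sending each vertex to a finite Grundy value m, or to ('inf', A) for
    an infinite-rank vertex of value ∞(A).
    """
    INF = float('inf')
    g = {x: (0 if not opts else INF) for x, opts in graph.items()}
    while True:
        new = {}
        for x, opts in graph.items():
            m = mex({g[y] for y in opts if g[y] != INF})
            ok = all(g[y] <= m or any(g[z] == m for z in graph[y])
                     for y in opts)
            new[x] = m if ok else INF
        if new == g:
            break
        g = new
    return {x: v if v != INF else
               ('inf', sorted({g[y] for y in graph[x] if g[y] != INF}))
            for x, v in g.items()}

# A single vertex with a loop is a drawn position of value ∞(⌀):
assert extended_grundy({'a': ['a']})['a'] == ('inf', [])
```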

In the light of Proposition 2.2, we can now define the extended Sprague–Grundy
function 𝒢 in the case of a locally finite graph V. Let x ∈ V. If the second case of Propo-
sition 2.2 holds, and 𝒢n (x) = m for all sufficiently large n, then 𝒢 (x) = m. Otherwise,
we write

𝒢 (x) = ∞(𝒜),

where 𝒜 is the finite set defined by

𝒜 = {a ∈ ℕ : 𝒢 (y) = a for some y ∈ Γ(x)}.

We then define the rank of x, written rank(x), to be the least n such that 𝒢n (x) is finite,
or ∞ if no such n exists. (Hence the finite-rank vertices are those x with 𝒢 (x) = m ∈ ℕ,
whereas the infinite-rank vertices are those x with 𝒢 (x) = ∞(𝒜) for some 𝒜 ⊂ ℕ.)
Some examples of extended Sprague–Grundy values can be found in Figure 18.2.

Theorem 2.3.
(a) 𝒢 (x) = 0 if and only if 𝒪(x) = 𝒫 .
(b) If 𝒢 (x) is a positive integer, then 𝒪(x) = 𝒩 .
(c) If 𝒢 (x) = ∞(𝒜) for a set 𝒜 with 0 ∈ 𝒜, then 𝒪(x) = 𝒩 .
(d) 𝒢 (x) = ∞(𝒜) for some 𝒜 with 0 ∉ 𝒜 if and only if 𝒪(x) = 𝒟.

Theorem 2.3 tells us that the Sprague–Grundy value of a position determines its
outcome class. In fact, much more is true: the Sprague–Grundy values of two games
determine the Sprague–Grundy value, and hence the outcome class, of their sum.
The algebra of the Sprague–Grundy values is the same as in the case of finite loopy
graphs, and full details can be found at the end of Section IV.4 of [12]. Again the proofs
carry over unchanged to the locally finite setting. We note a few particular conse-
quences.

Theorem 2.4. Let G and H be locally finite games.
(a) G + H has infinite rank if and only if at least one of G and H has infinite rank.
(b) If both G and H have infinite rank, then 𝒢 (G + H) = ∞(⌀), and in particular 𝒪(G +
H) = 𝒟.
(c) If 𝒢 (G) = m ∈ ℕ, then G is equivalent to ∗m, a nim heap of size m.
(d) G and H are equivalent if and only if 𝒢 (G) = 𝒢 (H).

Corollary 2.5. Every locally finite game is equivalent to some finite game.

We finish the section by recording the following consequence of the construction
of the extended Sprague–Grundy function in a form that will be useful for later refer-
ence.

Proposition 2.6. Let V be a locally finite graph, and let x ∈ V. Then the following are
equivalent:
(a) rank(x) ≤ n, and 𝒢 (x) = m;
(b) the following two properties hold:
(i) for each i such that 0 ≤ i ≤ m − 1, there exists yi ∈ Γ(x) such that rank(yi ) < n
and 𝒢 (yi ) = i;
(ii) for all y ∈ Γ(x), either rank(y) < n and 𝒢 (y) < m, or there is z ∈ Γ(y) with
rank(z) < n and 𝒢 (z) = m.

3 Reduced graphs
Let k ≥ 0. We will say that a locally finite directed graph V is k-stable if whenever x ∈ V
has infinite rank (i. e., whenever 𝒢 ((V, x)) = ∞(𝒜) for some 𝒜), then {0, 1, . . . , k} ⊆ 𝒜.
Note that by Theorem 2.3(d), being 0-stable is equivalent to being draw-free: every
position of V has a winning strategy either for the first player or for the second player.
Let 𝒫V be the set of 𝒫 -positions of the graph V, in other words those x ∈ V with
𝒢 ((V, x)) = 0. Consider the graph R(V) := V \ 𝒫V , which results from removing the
𝒫 -positions from V (and retaining all arcs between remaining vertices). More gen-
erally, for k ≥ 1, let Rk (V) be the graph resulting from removing all vertices x with
𝒢 ((V, x)) < k.
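
In code (ours, continuing the dictionary representation used earlier, with values as produced by an extended Sprague–Grundy computation such as the sketch in Section 2.4), the reduction is immediate:

```python
def reduced(graph, values, k=1):
    """R_k(V): delete every vertex whose finite Grundy value is below k.

    graph maps vertices to option lists; values maps vertices to finite
    Grundy numbers or to pairs ('inf', A) for infinite-rank vertices.
    """
    def removed(x):
        v = values[x]
        return not isinstance(v, tuple) and v < k
    return {x: [y for y in graph[x] if not removed(y)]
            for x in graph if not removed(x)}
```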

Theorem 3.1. Let V be a locally finite directed graph, and let x ∈ R(V).
(a) If x has finite rank in V, then also x has finite rank in R(V); specifically,

𝒢 ((R(V), x)) = 𝒢 ((V, x)) − 1.

(b) Suppose additionally that V is draw-free. If x has infinite rank in V, then also x has
infinite rank in R(V); specifically, if 𝒢 ((V, x)) = ∞(𝒜) for some 𝒜 (in which case
necessarily 0 ∈ 𝒜), then

𝒢 ((R(V), x)) = ∞(𝒜 − 1),

where 𝒜 − 1 denotes the set {a ≥ 0 : a + 1 ∈ 𝒜}.

If V is not draw-free, then the conclusion of part (b) may fail; removing the
𝒫 -positions may convert infinite-rank vertices to finite-rank vertices (either 𝒫 -posi-
tions or finite-rank 𝒩 -positions). See Figure 18.2 for an example.

Corollary 3.2. Let k ≥ 1.
(a) Suppose that V, R(V), . . . , Rk (V) are all draw-free. Then Rk+1 (V) = R(Rk (V)).
(b) V is k-stable if and only if V, R(V), . . . , Rk (V) are all draw-free.

Figure 18.2: The conclusion of Theorem 3.1(b) may fail when the graph is not draw-free. Here, remov-
ing the unique 𝒫-position e from the graph on the left, to give the graph on the right, converts the
position a from infinite rank to finite rank. The extended Sprague–Grundy values are shown by the
nodes in red.

Proof of Theorem 3.1. (a) For the first part, we use induction on the rank of x in V. We
claim that if x ∈ R(V) has rank((V, x)) = n and 𝒢 ((V, x)) = m > 0, then rank((R(V), x)) ≤
n and 𝒢 ((R(V), x)) = m − 1.
Any x with rank 0 in V is in 𝒫V and hence is not a vertex of R(V), so the claim
holds vacuously for x with rank((V, x)) = 0.
Now for n > 0, suppose the claim holds for all x with rank((V, x)) < n, and consider
x ∈ R(V) with rank((V, x)) = n and 𝒢 ((V, x)) = m.
From Proposition 2.6 we have the following properties:
(i) for each i = 0, . . . , m − 1, there exists yi ∈ ΓV (x) such that rank((V, yi )) < n and
𝒢 ((V, yi )) = i;
(ii) for all y ∈ ΓV (x), either rank((V, y)) < n and 𝒢 ((V, y)) < m, or there is z ∈ ΓV (y)
with rank((V, z)) < n and 𝒢 ((V, z)) = m.

Applying the induction hypothesis we get:
(i) for each i = 1, . . . , m − 1, there exists yi ∈ ΓR(V) (x) such that rank((R(V), yi )) < n and
𝒢 ((R(V), yi )) = i − 1;
(ii) for all y ∈ ΓR(V) (x), either rank(R(V), y) < n and 𝒢 ((R(V), y)) < m − 1, or there is
z ∈ ΓR(V) (y) with rank((R(V), z)) < n and 𝒢 ((R(V), z)) = m − 1.

Using Proposition 2.6 again, we conclude that rank((R(V), x)) ≤ n and 𝒢 ((R(V), x)) =
m − 1, completing the induction step.
(b) Now we suppose that in addition V is draw-free. We first want to show that
if x has finite rank in R(V), then it also has finite rank in V. In this case, we work by
induction on the rank of x in R(V).
If x has rank 0 in R(V), (i. e., if x is terminal in R(V)), then all options of x in V are
in 𝒫V (i. e., 𝒢 ((V, y)) = 0), which gives 𝒢 ((V, x)) = 1.
Now let n > 0. Assume that any vertex with rank less than n in R(V) has finite rank
in V, and consider any vertex x with rank n in R(V), say 𝒢 ((R(V), x)) = m.
Then by Proposition 2.6 again,

(i) There are y0 , y1 , . . . , ym−1 ∈ ΓR(V) (x) such that for each i, rank((R(V), yi )) < n and
𝒢 ((R(V), yi )) = i. Then by the induction hypothesis, rank((V, yi )) < ∞, and part
(a) gives 𝒢 ((V, yi )) = i + 1.
(ii) For all y ∈ ΓR(V) (x), either rank((R(V), y)) < n and 𝒢 ((R(V), y)) < m, or there is z ∈
ΓR(V) (y) such that rank((R(V), z)) < n and 𝒢 ((R(V), z)) = m. Then by the induction
hypothesis and part (a) again, either 𝒢 (V, y) < m + 1, or there is such a z with
𝒢 (V, z) = m + 1.

Now consider two possibilities. First, suppose there is y ∈ ΓV (x) with 𝒢 ((V, y)) = 0.
Then for some large enough n′ , we get 𝒢n′ ((V, x)) = m + 1, and indeed x has finite rank
in V. Alternatively, there is no such y. Then if x had infinite rank in V, then we would
have 𝒢 ((V, x)) = ∞(𝒜) for some 𝒜 with 0 ∉ 𝒜. This would contradict the assumption
that V is draw-free. Hence again x must have finite rank in V, as required.

3.1 Mex labelings and interpretation of k-stability in the case of
trees
The material in this section is not used in the later analysis, but it aims to give helpful
intuition about the notion of k-stability in the case of trees, showing that it can be
interpreted in terms of consistency of the set of vertices labeled 0, 1, . . . , k across all
labelings that locally respect the mex recursions.
Let V be a locally finite directed graph. We call a function f : V → ℕ a mex labeling
of V if for all x ∈ V, f (x) = mex{f (y), y ∈ ΓV (x)}.
Of course, if V is finite and loop-free, then there is a unique mex labeling f of V
given by f (x) = 𝒢 ((V, x)) for x ∈ V.
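
In code (ours), with the same dictionary representation, checking the defining property of a mex labeling is a one-liner:

```python
def is_mex_labeling(graph, f):
    """Check that f obeys the mex recursion at every vertex of graph."""
    def mex(s):
        m = 0
        while m in s:
            m += 1
        return m
    return all(f[x] == mex({f[y] for y in graph[x]}) for x in graph)
```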
Notice also that any locally finite tree has at least one mex labeling. To see this,
we can consider the sequence of finite graphs (Vn , n ∈ ℕ), where Vn is the induced
subgraph of V containing all vertices x such that height(x) ≤ n. Each such Vn is finite
and loop-free and so has a mex labeling fn . In any mex labeling the vertex x has value
no greater than the out-degree of x (which is finite by assumption). Then a compact-
ness/diagonalization argument shows that there exists a labeling f : V → ℕ that, on
any finite subset W ⊂ V, agrees with infinitely many of the fn . In particular, for any
vertex x, f agrees with one of the fn on {x} ∪ ΓV (x). Then f obeys the mex recursion at
every such vertex x, so f is indeed a mex labeling of V.

Proposition 3.3. Let V be a locally finite tree, and let k ∈ ℕ.
(a) Suppose V is k-stable. Then the set {x ∈ V : f (x) = k} is the same for all mex
labelings f of V and is equal to {x ∈ V : 𝒢 ((V, x)) = k}.
(b) Suppose V is not k-stable but is (k−1)-stable. (Ignore the vacuous condition of (k−1)-
stability for k = 0.) Let x ∈ V with 𝒢 (x) = ∞(𝒜) for some 𝒜 not containing k. Then
there are mex labelings f and f ′ of V with f (x) = k and f ′ (x) ≠ k.

Note that the conclusion of part (b) can fail even for graphs that are acyclic in
the sense of having no directed cycles. See Figure 18.3 for an example. (The method
of proof below makes clear that the result does extend to bipartite graphs with no
directed cycles.)

Figure 18.3: An example showing that the conclusion of Proposition 3.3(b) can fail even for “loop-
free” graphs (i. e., graphs with no directed cycle). The directed graph with vertex set {ai , i ∈
ℕ} ∪ {bi , i ∈ ℕ} and arcs from ai to bi , from ai to ai+1 , and from bi to bi+1 for each i. There are two
mex labelings, one shown in red above the vertices and the other shown in blue below the vertices.
Every position has Sprague–Grundy value ∞(⌀), and the graph is not 0-stable. However, the posi-
tions bi have value 0 in both mex labelings, whereas the positions ai have nonzero values in both
mex labelings.

Proof. We start by proving that if x ∈ V has finite rank with 𝒢 (x) = m, then f (x) = m
for all mex labelings f of V. (This holds for any locally finite directed graph V.)
We proceed by induction on rank(x). Let f be any mex labeling of V.
If rank(x) = 0, then x has no options. Then 𝒢 (x) = 0, and so f (x) = mex(⌀) = 0.
Now suppose that rank(x) = n > 0, 𝒢 (x) = m, and the statement holds for all
vertices of rank less than n.
From Proposition 2.6, for each i with 0 ≤ i ≤ m − 1, there exists yi ∈ Γ(x) with
𝒢 (yi ) = i and rank(yi ) < n. Hence f (yi ) = i.
Also, for every y ∈ Γ(x) with 𝒢 (y) ≥ m, there is z ∈ Γ(y) with rank(z) < n and
𝒢 (z) = m. Then f (z) = m, and hence f (y) ≠ m.
Thus x has options on which f takes value 0, 1, . . . , m − 1, but no option on which f
takes value m. This gives f (x) = m as required.
To complete the proof of part (a), suppose that V is k-stable, and let f be any mex
labeling of V. Then any vertex x with infinite rank has 𝒢 (x) = ∞(𝒜) for some 𝒜 with
k ∈ 𝒜. Hence there exists y ∈ Γ(x) with 𝒢 (y) = k, giving f (y) = k. Then f (x) ≠ k. So,
indeed, the set of vertices x with f (x) = k is exactly the set of x with 𝒢 (x) = k.
We turn to part (b), starting with the case k = 0. Suppose that V is a locally fi-
nite tree that is not 0-stable. Let x be any vertex with 𝒢 (x) = ∞(𝒜) for some 𝒜 not
containing 0 (i. e., x ∈ 𝒟).
Take any n ≥ height(x). Since the game from position x is drawn, if we consider the
game on the truncated graph Vn described just before the statement of the proposition,
so that all vertices at height n become terminal, then position x becomes a first-player
win if n − height(x) is odd, and a second-player win if n − height(x) is even.

Then we can apply again the compactness argument mentioned before the state-
ment of Proposition 3.3, separately for odd n and even n. This yields two mex labelings
f and f ′ , one of which gives value 0 to x, and the other of which gives a strictly positive
value to x, as required. This completes the proof of part (b) in the case k = 0.
Now we extend to k > 0. Suppose V is (k − 1)-stable but not k-stable. As in Corol-
lary 3.2, we can apply the reduction operator k times, removing all the vertices y ∈ V
with 𝒢 ((V, y)) < k to arrive at the graph Rk (V).
Any x ∈ Rk (V) either has 𝒢 ((V, x)) = m for some finite m ≥ k, or 𝒢 ((V, x)) = ∞(𝒜)
for some 𝒜 with {0, . . . , k − 1} ⊆ 𝒜. It is then easy to check that whenever f̂ : Rk (V) → ℕ
is a mex labeling of Rk (V), we can obtain a mex labeling f : V → ℕ of V by defining

f (x) = 𝒢 ((V, x)) if 𝒢 ((V, x)) < k, and f (x) = f̂ (x) + k otherwise. (18.1)

Let x ∈ V with 𝒢 ((V, x)) = ∞(𝒜) for some 𝒜 containing 0, . . . , k − 1 but not k. Then
by applying Theorem 3.1 k times we have x ∈ Rk (V) and 𝒢 ((Rk (V), x)) = ∞(ℬ), where
ℬ = 𝒜 − k. In particular, 0 ∉ ℬ (i. e., the position x in Rk (V) is a draw). We wish to show
that there are mex labelings f , f ′ of V such that f (x) = k and f ′ (x) ≠ k. In light of (18.1),
it is enough to show that there are mex labelings f̂ and f̂′ of Rk (V) such that f̂ (x) = 0
and f̂′ (x) > 0.
Since x is a draw in Rk (V), we would like to use the same approach as in the
case k = 0. The situation is more complicated since the graph Rk (V) may not be con-
nected. However, the graph Rk (V) is a union of finitely or countably many disjoint
trees. Any labeling that restricts to a mex labeling of each tree component is a mex
labeling of the whole graph. So it suffices to find mex labelings of the tree component
of Rk (V) that contains x, one of which assigns value 0 to x, and another of which as-
signs strictly positive value to x. This indeed can be done using the same compactness
argument used in the case k = 0.
This completes the proof of part (b).

4 Random game-trees
4.1 Galton–Watson trees
A Galton–Watson (or Bienaymé) branching process is constructed as follows. We fix
some offspring distribution, which is a probability distribution p = (pk , k ∈ ℕ) on the
nonnegative integers. The process begins with a single individual, called the root. The
root individual has a random number of children distributed according to the offspring
distribution, which form generation 1. Then each of the members of generation 1 has a
number of children according to the offspring distribution, forming generation 2, and

so on. All family sizes are independent. See, for example, [6] for a basic introduction,
and [11] for much more depth including a rigorous construction.
We derive a directed graph from the process by regarding each individual as a ver-
tex and putting an arc to each child from its parent. In this way, each vertex of the
graph has in-degree 1, except for the root which has in-degree 0. We call the resulting
graph a Galton–Watson tree. This tree has a natural self-similarity property: condi-
tional on the number of the children of the root being k, the subtrees rooted at those
children are independent, and each one has the distribution of the original Galton–
Watson tree.
We assume always that p0 > 0, so that the tree can have terminal vertices.
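
As a quick illustration (ours, not from the paper), such a tree, truncated at a given depth, can be sampled in a few lines; the offspring argument is any function returning a random child count:

```python
import random

def sample_gw(offspring, depth):
    """Sample a Galton-Watson tree down to `depth` generations.

    A vertex is represented by the list of its children, so a leaf is [].
    Vertices at the cutoff depth are left childless (truncation).
    """
    if depth == 0:
        return []
    return [sample_gw(offspring, depth - 1) for _ in range(offspring())]

# The 0-or-4 offspring distribution from the introduction, with p = 0.6:
tree = sample_gw(lambda: 4 if random.random() < 0.6 else 0, depth=6)
```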
A key role in what follows will be played by the probability generating function of
the offspring distribution, defined by

ϕ(s) = ∑_{k≥0} p_k s^k .

The function ϕ is strictly increasing on the interval [0, 1], and maps [0, 1] bijectively to
the interval [p0 , 1].
A fundamental result is a criterion for the tree to be infinite in terms of the mean
μ = ∑k≥0 kpk = ϕ′ (1) of the offspring distribution p. Excluding the trivial case p1 = 1
(where with probability 1 the tree consists of a single path), we have that whenever
μ ≤ 1, the tree is finite with probability 1, and whenever μ > 1, there is positive proba-
bility for the tree to be infinite.
If d = sup{k : pk > 0} is finite, then we say that the offspring distribution has max-
imum out-degree d. Otherwise we say that the offspring distribution has unbounded
vertex degrees.

4.2 Galton–Watson games


We will consider Galton–Watson games, that is, games whose directed graphs are
Galton–Watson trees T.
We start with a very simple lemma, which helps simplify the language.

Lemma 4.1. Consider a Galton–Watson tree T with root o. Let 𝒞 be any set of possible
Sprague–Grundy values. The following are equivalent:
(a) ℙ(𝒢 ((T, o)) ∈ 𝒞 ) > 0;
(b) ℙ(𝒢 ((T, u)) ∈ 𝒞 for some u ∈ T) > 0.

For example, the tree T is draw-free with probability 1 if and only if the probability
that the root is drawn is 0. So we do not need to distinguish carefully between saying
that “the tree has draws with positive probability” and that “the root is drawn with

positive probability”. More generally, the tree T is k-stable with probability 1 if and
only if the probability that 𝒢 ((T, o)) = ∞(𝒜) for some 𝒜 with {0, 1, . . . , k} ⊈ 𝒜 is 0.

Proof of Lemma 4.1. Trivially, (a) implies (b). On the other hand, if (a) fails, so that
ℙ(𝒢 ((T, o)) ∈ 𝒞 ) = 0, then the self-similarity of the Galton–Watson tree, the fact that the
tree has at most countably many vertices and the countable additivity of probability
measures combine to give that also ℙ(𝒢 ((T, u)) ∈ 𝒞 for some u ∈ T) = 0, so that (b)
also fails.

The question of when a Galton–Watson game has positive probability to be a draw
was considered in [9].
Let 𝒫n be the set of vertices from which the second player has a winning strategy
that guarantees to win within 2n moves (n by each player), and let Pn be the probability
that o ∈ 𝒫n . Note that o ∈ 𝒫n if and only if for every child u of o, u itself has a child
in 𝒫n−1 . This leads to the following recursion for the probabilities Pn in terms of the
generating function:

Pn = ϕ(1 − ϕ(1 − Pn−1 )). (18.2)

Now let P be the probability that o ∈ 𝒫 . We have P = limn→∞ Pn . Taking limits in (18.2)
and using the fact that the generating function ϕ is continuous and increasing on
[0, 1], we obtain part (a) of the following result. A similar approach involving the
probability of winning strategies for the first player gives part (b). For full details,
see [9].

Proposition 4.2 (Theorem 1 of [9]). Define the function h : [0, 1] → [0, 1] by

h(s) = ϕ(1 − ϕ(1 − s)). (18.3)

(a) P := ℙ((T, o) ∈ 𝒫 ) is the smallest fixed point of h in [0, 1].
(b) If N := ℙ((T, o) ∈ 𝒩 ), then 1 − N is the largest fixed point of h in [0, 1].

Corollary 4.3. D := ℙ((T, o) ∈ 𝒟) = 1 − N − P is positive if and only if the function h
defined by (18.3) has more than one fixed point in [0, 1].

Note that h defined in (18.3) is the second iteration of the function s ↦ ϕ(1 − s). That
function is continuous and strictly decreasing, mapping [0, 1] onto [p0 , 1]. It follows
that it has precisely one fixed point in [0, 1] and that this fixed point is also a fixed
point of h. So Corollary 4.3 tells us that the game has positive probability of draws if
and only if h has further fixed points that are not fixed points of s ↦ ϕ(1 − s).
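
Numerically, P and N are easy to approximate for any offspring distribution: h is increasing, so iterating it from 0 converges monotonically to its smallest fixed point, and iterating from 1 to its largest. A sketch (ours):

```python
import math

def outcome_probabilities(phi, iters=100000):
    """Approximate (P, N, D) for the Galton-Watson game with pgf phi.

    h(s) = phi(1 - phi(1 - s)) is increasing; its smallest fixed point
    is P and its largest is 1 - N (Proposition 4.2).
    """
    h = lambda s: phi(1 - phi(1 - s))
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        lo, hi = h(lo), h(hi)
    P, N = lo, 1 - hi
    return P, N, 1 - N - P

# Poisson(lam) offspring, phi(s) = exp(lam * (s - 1)): the draw
# probability becomes positive just above lam = e (see Example 4.4).
for lam in (2.7, 2.8):
    print(lam, outcome_probabilities(lambda s, lam=lam: math.exp(lam * (s - 1))))
```

The same iteration with ϕ(s) = 1 − p + p s^4 locates the draw threshold a2 = 5^{3/4}/4 for the 0-or-4 game of the introduction.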
Two particular families of offspring distributions had been considered earlier. The
Binomial(2, p) case was studied by Holroyd [8]. The case of the Poisson offspring family
is closely related to the analysis of the Karp–Sipser algorithm used to find large match-
ings or independent sets of a graph, which was introduced by Karp and Sipser [10];

Figure 18.4: An illustration of the phase transition from the nondraw to the draw region for
Poisson(λ) offspring distributions (see Example 4.4). The two plots show the function h(s) − s for
s ∈ [0, 1], where h is defined by (18.3). The fixed points of h are those s where h(s) − s crosses the
horizontal axis. On the left, λ = 2.7, just below the critical point λ = e; the function h has just one
fixed point. On the right, λ = 2.8, just above the critical point; now h has three fixed points.

the link to games is not described explicitly in that paper, but the choice of notation
and terminology makes clear that the authors were aware of it.
One particular focus of [9] was on the nature of the phase transitions between
the set of offspring distributions without draws and the set of offspring distributions
with positive probability of draws. This transition can be either continuous or dis-
continuous. Without going into precise details, we illustrate with a couple of exam-
ples.

Example 4.4 (Poisson distribution – continuous phase transition). The Poisson(λ)
offspring family was considered in Proposition 3.2 of [9]. The game has probabil-
ity 0 of a draw if λ ≤ e and positive probability of a draw if λ > e. The phase transition
is illustrated in Figure 18.4. For λ ≤ e, the function h has only one fixed point, whereas
for λ > e, h has three fixed points. The additional fixed points emerge continuously
from the original fixed point as λ goes above e. Note that the probability of a draw at
the critical point itself is 0; more strongly, we have that the draw probability ℙ(o ∈ 𝒟)
is a continuous function of λ.

Example 4.5 (A discontinuous phase transition). Consider a family of offspring distri-
butions with p0 = 1 − a, p2 = a/2, and p10 = a/2, where a ∈ [0, 1]. This family is used
in the proof of Proposition 4(i) of [9]. Again, there is some critical point ac ≈ 0.979
such that there is positive probability of a draw for a > ac and not for a < ac . How-
ever, unlike in the Poisson case above, at the critical point itself the function h already
has three fixed points, and the probability ℙ(o ∈ 𝒟) jumps discontinuously from 0
for a < ac to approximately 0.61 at a = ac itself. The difference in the nature of the
emergence of the additional fixed points of h can be seen by comparing Figures 18.4
and 18.5.

Figure 18.5: The phase transition for the family of offspring distributions given in Example 4.5 with
p0 = 1 − a, p2 = a/2, and p10 = a/2. Again, the function h(s) − s is shown for s ∈ [0, 1]. On the left,
a = 0.977, and on the right, a = 0.979 ≈ ac . Unlike in Figure 18.4, at the critical point, there are
already multiple fixed points of h; at ac , the draw probability jumps from 0 to a positive value around
0.681, which is the distance between the minimum and maximum fixed points of h.

4.3 Existence of infinite rank vertices in Galton–Watson games


Now we go beyond the question of whether draws have positive probability, to ask
more generally about the extended Sprague–Grundy values that occur in a Galton–
Watson game. A specific question will be whether, when draws are absent, there are
still some infinite rank positions. As suggested by Corollary 3.2, we can investigate the
k-stability of the tree T by looking at whether draws occur for the reduced trees Rk (T).
The reduction operator behaves particularly nicely in the setting of a Galton–Watson
tree.

Theorem 4.6. Consider a Galton–Watson tree T whose offspring distribution (pn , n ≥ 0)
has probability generating function ϕ.
Suppose the tree is draw-free with probability 1. Let P be the probability that the root
o is a 𝒫 -position.
Condition on the event 𝒪(o) = 𝒩 , and consider the graph obtained by removing
all the 𝒫 -positions. Let T^(1) denote the component connected to the root o in this
graph. Then T^(1) is again a Galton–Watson tree rooted at o, whose offspring
distribution has the probability generating function

ϕ^(1) (s) = (1/(1 − P)) [ϕ(P + s(1 − P)) − ϕ(s(1 − P))]. (18.4)

Proof. Since we assume that T has no draws, each vertex of T is either a 𝒫 -position or
an 𝒩 -position. The type of a vertex is determined by the subtree rooted at that vertex.
Conditionally on the number of children of the root, the subtrees rooted at each child
are independent, and each has the same distribution as the original tree. In particular,
each child is independently a 𝒫 -position with probability P and an 𝒩 -position with
probability 1 − P.

This gives us a two-type Galton–Watson process. We have the familiar recursion
that a vertex is an 𝒩 -position if and only if at least one of its children is a 𝒫 -position.
We condition on the root being of type 𝒩 and retain only its 𝒩 -type children, the
𝒩 -type children of those children, and so on. This gives a one-type Galton–Watson
process, and its offspring distribution is the distribution of the number of 𝒩 -type chil-
dren of the root in the original process, conditionally on the root having type 𝒩 .
The probability that the root has k 𝒫 -children and m 𝒩 -children is

p_{m+k} · C(m + k, k) · P^k (1 − P)^m ,

where C(a, b) denotes the binomial coefficient “a choose b”.

We can sum over k ≥ 1 to obtain the probability that the root is of type 𝒩 and has m
𝒩 -children. Finally, we can condition on the event that the root has type 𝒩 (which
has probability 1 − P) to obtain that the conditional probability that the root has m
𝒩 -children given that it has type 𝒩 is

p^(1)_m := (1/(1 − P)) ∑_{k≥1} p_{m+k} · C(m + k, k) · P^k (1 − P)^m .

Finally, we want to calculate the probability generating function ϕ^(1) (s) :=
∑_{m≥0} s^m p^(1)_m of this distribution. This can easily be done using the binomial
theorem to arrive at the form given in (18.4).
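
In code (ours), Theorem 4.6 becomes a transformation of probability generating functions; assuming the tree is draw-free, so that Proposition 4.2 identifies P as the smallest fixed point of h, the transformation can be iterated to probe k-stability via Corollary 3.2:

```python
def reduce_pgf(phi, iters=100000):
    """Return (P, phi1): the probability that the root is a P-position,
    and the pgf (18.4) of the reduced tree T^(1) (draw-free case)."""
    P = 0.0
    for _ in range(iters):
        P = phi(1 - phi(1 - P))   # converges to the smallest fixed point
    def phi1(s):
        return (phi(P + s * (1 - P)) - phi(s * (1 - P))) / (1 - P)
    return P, phi1
```

Repeatedly applying reduce_pgf and testing for draws at each stage (for instance, with outcome_probabilities above) gives a numerical handle on thresholds such as a1 from the introduction.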

Combining Corollary 3.2 and Theorem 4.6 is the key to studying the infinite-rank
vertices of our Galton–Watson tree T; see the strategy described at the beginning of
Section 5.
We finish this section with a result about the possible infinite Sprague–Grundy
values that can occur in a Galton–Watson game. Essentially, the value ∞(𝒜) has posi-
tive probability to appear for every finite 𝒜 that is not ruled out either by k-stability or
by finite maximum vertex degree. Most notably, part (a)(i) says that for a tree that has
draws and for which the offspring distribution has infinite support, all finite 𝒜 have
positive probabilities.

Proposition 4.7. Consider the game on a Galton–Watson tree.
(a) Suppose there is positive probability of a draw.
(i) If the vertex degrees are unbounded, then for any finite 𝒜 ⊂ ℕ, there is positive
probability that 𝒢 (o) = ∞(𝒜).
(ii) If the maximum out-degree is d, then there is positive probability that 𝒢 (o) =
∞(𝒜) if and only if 𝒜 ⊂ {0, 1, . . . , d} with |𝒜| ≤ d − 1.
(b) For k ≥ 1, suppose that the tree is (k − 1)-stable with probability 1 but has positive
probability not to be k-stable.
(i) If the vertex degrees are unbounded, then for any finite 𝒜 ⊂ ℕ containing
{0, . . . , k − 1}, there is positive probability that 𝒢 (o) = ∞(𝒜).

(ii) If the maximum out-degree is d, then there is positive probability that 𝒢 (o) =
∞(𝒜) if and only if {0, 1, . . . , k − 1} ⊆ 𝒜 ⊂ {0, 1, . . . , d} with |𝒜| ≤ d − 1.

Proof. First, we note that all finite Sprague–Grundy values have positive probabilities,
up to the maximum out-degree d if there is one. This is easy by induction. We know that
value 0 is possible since any terminal position has value 0. If values 0, 1, . . . , k − 1 are
possible, and if it is possible for the root to have degree k or larger, then there is positive
probability that the set of values of the children of the root is precisely {0, 1, . . . , k − 1},
giving value k to the root as required.
Now for part (a), since draws are possible, the value ∞(ℬ) has positive probabil-
ity for some ℬ not containing 0. In that case, there is positive probability for all the
children of the root to have value ∞(ℬ), and then the root has value ∞(0).
So the value ∞(0) has positive probability. Now if 𝒜 is any finite set such that
the number of children of the root can be as large as |𝒜| + 1, then there is positive
probability that the set of values of the children of the root is precisely 𝒜 ∪ {∞(0)}, and
in that case the value of the root is ∞(𝒜) as required.
Finally, if |𝒜| is greater than or equal to the maximum degree, then the value ∞(𝒜)
is impossible, since any vertex with such a value must have at least one child with
value m for each m ∈ 𝒜, and additionally at least one child with infinite rank.
We can derive the result for part (b) by applying part (a) to the Galton–Watson
tree T (k) obtained by conditioning the root to have Sprague–Grundy value not in
{0, 1, . . . , k − 1} and removing all the vertices with values {0, 1, . . . , k − 1} from the graph,
as described above. Theorem 3.1 tells us that if the resulting tree has positive proba-
bility to have a node with value ∞(𝒜), then the original tree has positive probability
to have a node with value ∞(ℬ) where ℬ = {b ≥ k : b − k ∈ 𝒜} ∪ {0, 1, . . . , k − 1}, and
the desired results follow.

Remark 4.8. Suppose we have a Galton–Watson tree T with positive probability to be


infinite, and a set 𝒞 of Sprague–Grundy values with ℙ(𝒢 (o) ∈ 𝒞 ) > 0. A straightforward
extension of Lemma 4.1 says that conditionally on T being infinite, with probability 1,
there exists u ∈ T with 𝒢 (u) ∈ 𝒞 .
Combining with Proposition 4.7, we get the following appealing property. If T
has unbounded vertex degrees and positive probability of draws, then conditionally
on T being infinite, with probability 1, vertices with every possible extended Sprague–
Grundy value are found in the tree.

5 Examples
First, we lay out how to use the results of the previous sections to address the question
of which infinite-rank Sprague–Grundy values have positive probability for a given
Galton–Watson tree T.

Let ϕ be the probability generating function of the offspring distribution of T. To


examine whether T can have draws, we apply the criterion given in Corollary 4.3: T is
draw-free with probability 1 if and only if the function h(s) = 1 − ϕ(1 − ϕ(s)) has a
unique fixed point.
If so, then we use the procedure in Theorem 4.6. We condition the root to be
an 𝒩 -position, remove all the 𝒫 -positions, and retain the connected component of
the root to obtain a new Galton–Watson tree T (1) with an offspring distribution whose
probability generating function is ϕ(1) . We then examine whether or not this new
generating function gives a draw-free tree.
If it does, then we can repeat the procedure again, producing a new generating
function which we call ϕ(2) , corresponding to removing the positions with Sprague–
Grundy values 0 and 1 from the original tree.
If we perform k reductions and still have a draw-free tree at every step, then this
tells us that our original tree was k-stable with probability 1.
If the iteration of this procedure never produces a tree with positive probability of
a draw, then the original tree had probability 0 of having infinite-rank vertices. (Note
that, for example, if at any step, we arrive at a tree which is subcritical, i. e., whose
offspring distribution has mean less than or equal to 1 and which therefore has prob-
ability 1 to be of finite size, then we know that every further reduction must give rise
to a draw-free tree.)
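The following Python sketch (ours, not part of the formal development) illustrates this strategy. It counts fixed points of h(s) = 1 − ϕ(1 − ϕ(s)) by sign changes on a grid, a numerical heuristic that can miss tangential fixed points, and then performs one reduction step via (18.4). For concreteness it uses the 0-or-4 offspring law of Example 5.2 below with p = 0.6, a value in the regime where T itself is draw-free but T (1) is not.

import numpy as np

def count_fixed_points(h, n=200001):
    # Count sign changes of h(s) - s on a fine grid of [0, 1].
    s = np.linspace(0.0, 1.0, n)
    d = h(s) - s
    return int(np.sum(np.sign(d[:-1]) != np.sign(d[1:])))

p = 0.6                                   # 0-or-4 offspring law, Example 5.2
phi = lambda s: 1 - p + p * s**4
h = lambda s: 1 - phi(1 - phi(s))
print(count_fixed_points(h))              # expected: 1 (T is draw-free)

# One reduction step: P solves P = phi(1 - P); then apply (18.4).
lo, hi = 0.0, 1.0
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(1 - mid) - mid > 0 else (lo, mid)
P = 0.5 * (lo + hi)

phi1 = lambda s: (phi(P + s * (1 - P)) - phi(s * (1 - P))) / (1 - P)
h1 = lambda s: 1 - phi1(1 - phi1(s))
print(count_fixed_points(h1))             # expected: > 1 (T is not 1-stable)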
We now apply this strategy to a few different examples of families of offspring
distributions. We see a surprising range of types of behavior.

Example 5.1 (Poisson case, continued). Galton–Watson trees with Poisson offspring
distribution behave particularly nicely under the graph reduction operation. This al-
lows us to give a complete analysis of the Poisson case without any need for calcula-
tions or numerical approximation.
The tree has positive probability to be infinite precisely when λ > 1. We already
saw in Example 4.4 that there is positive probability of a draw precisely when λ > e.
Suppose we are in the case λ ≤ e without draws. So each node is a 𝒫 -node (with
probability P) or an 𝒩 -node (with probability 1 − P).
By basic properties of the Poisson distribution, the number of 𝒫 -children of
the root is Poisson(λP)-distributed, the number of 𝒩 -children of the root is
Poisson(λ(1 − P))-distributed, and the two are independent.
If we condition the root to have at least one 𝒫 -child and then remove all its
𝒫 -children, then because of the independence of the number of 𝒫 -children and the
number of 𝒩 -children, we are simply left with a Poisson(λ(1 − P)) number of chil-
dren.
So we again have a Poisson Galton–Watson tree, but now with a new parameter
λ(1) = λ(1 − P) < λ. Since λ(1) < e, the new tree is still draw-free with probability 1.

Hence, to adapt the terminology of the introduction, in the Poisson case, we may
see a “blatantly infinite” game once λ > e, but for λ ≤ e, we are at worst “latently infinite”.
There is no λ that gives “patently infinite” behavior, whereby draws are absent,
but infinite rank vertices have positive probability.

Example 5.2 (Degrees 0 and 4). We return to the example in the introduction, where
all out-degrees are 0 or 4. We have p4 = p and p0 = 1 − p for some p ∈ (0, 1).
If p ≤ a0 := 1/4, then the mean offspring size is less than or equal to 1, and the tree
is finite with probability 1.
We can show algebraically that there is positive probability of a draw if and only
if p > a2 := 5^(3/4)/4 ≈ 0.83593. Namely, we can obtain that the function h defined in (18.3)
has derivative less than 1 on [0, 1] for all p ≤ a2 (except at a single point in the case
p = a2 ), and so h has just one fixed point for such p. Meanwhile, for p > a2 , there is a
fixed point s∗ of the function 1 − ϕ for which h′ (s∗ ) > 1, and this can be used to show
that h has at least two further fixed points. Corollary 4.3 then gives the result.
Between a0 and a2 there exist no draws, but the tree is infinite with positive prob-
ability, so we may ask whether there can exist positions with infinite rank.
Numerically, we observe a phase transition around the point a1 ≈ 0.52198. For
p ≤ a1 , we know that the tree T has zero probability of a draw, and we observe that
the same is also true for the trees T (1) and T (2) . (Their maximum out-degrees are 3 and
2, respectively, so their generating functions ϕ(1) and ϕ(2) are cubic and quadratic,
respectively. The tree T (3) has vertices of out-degrees only 0 and 1 and will also be
finite with probability 1, so we do not need to examine T (k) for any higher k.)
Hence for p ∈ (a0 , a1 ], we have the “latently infinite” phase where all Sprague–
Grundy values are finite with probability 1.
However, for p ∈ (a1 , a2 ], we observe that the function h(1) (s) := 1 − ϕ(1) (1 − ϕ(1) (s))
has more than one fixed point. Consequently, there is positive probability of a draw
in the tree T (1) . The tree T has positive probability not to be 1-stable and so to have
positions of infinite rank.
The behavior of h, h(1) , and h(2) around the phase transition point p = a1 is shown
in Figure 18.6. Although the precise nature and location of this phase transition is only
found numerically, it is not hard to show rigorously that for p just above a0 , the func-
tions h(1) and h(2) have only one fixed point, whereas for p just below a2 , the function
h(1) has more than one fixed point, so that the family of distributions does display
all four of the “finite”, “latently infinite”, “patently infinite”, and “blatantly infinite”
types of behavior.

Example 5.3 (Geometric case). We now consider the family of geometric offspring dis-
tributions with pk = qk (1 − q) for k = 0, 1, 2, . . . , for some q ∈ (0, 1).
Rather surprisingly, there is no q for which draws have positive probability! See,
for example, Proposition 3(iii) of [9]. (This shows, for example, that the property of
having positive probability of draws is not monotone in the offspring distribution. If
we take any λ > e, then as discussed above, the Poisson(λ) distribution has positive
probability of draws, but for q sufficiently large, this distribution is stochastically dom-
inated by the Geometric(q) distribution, which has no draws.)

Figure 18.6: The case of the 0-or-4 distribution from Example 5.2 with p = 0.52198 ≈ a1 . From left to
right the three graphs show the functions h(s) − s, h(1) (s) − s, and h(2) (s) − s. As p moves through
the critical point a1 , the function h(1) acquires multiple fixed points. For p ≤ a1 , the tree has only
finite-rank vertices. For p > a1 , the tree no longer has probability 1 to be 1-stable, and, for example,
the Sprague–Grundy value ∞(0) has positive probability.

Figure 18.7: The geometric case of Example 5.3 with q = 0.91 ∈ [q2 , q3 ). As in Figure 18.6, we plot the
functions h(s)−s, h(1) (s)−s, and h(2) (s)−s. The functions h and h(1) have unique fixed points, but the
function h(2) has multiple fixed points; so the tree has probability 1 to be 1-stable but has probability
less than 1 of being 2-stable.

However, other interesting phase transitions for the geometric family do occur. Numer-
ically, we observe that there are critical values q0 = 1/2, q1 ≈ 0.88578, q2 ≈ 0.88956,
and q3 ≈ 0.923077 such that the following hold.
– For q ≤ 0.5, the tree is finite with probability 1.
– For q ∈ (0.5, q1 ], there are infinite paths with positive probability, but the tree
is 3-stable with probability 1. In fact, for q sufficiently close to 0.5, the tree T (1)
is finite with probability 1, and so in fact the tree is k-stable for all k, that is, all
positions have finite rank (the latently infinite phase). It seems plausible that in
fact the latently infinite phase continues all the way to q1 , but we do not know
how to demonstrate that.
– For q ∈ (q1 , q2 ], with positive probability the tree is not 3-stable; however, it con-
tinues to be 2-stable.
– For q ∈ (q2 , q3 ], with positive probability the tree is not 2-stable; however, it con-
tinues to be 1-stable (see Figure 18.7).
– For q ≥ q3 , with positive probability the tree is not 1-stable (but as we know, it
continues to be 0-stable or, in other words, draw-free for all q).

Except for the transition at q0 , the precise nature and location of all the phase transi-
tions above are only found numerically. However, with a sufficiently precise analysis,

we could rigorously establish in each case a smaller interval on which the claimed
behavior holds (for example, we could find some subinterval of the claimed interval
(q2 , q3 ) on which h(1) has only one fixed point, whereas h(2) has more than one fixed
point).

In summary, the three families in Examples 5.1–5.3 show a wide variety of behav-
iors. In the Poisson case, we have the existence of draws whenever we have the exis-
tence of positions with infinite rank. In the 0-or-4 case, there is additionally a phase
with infinite rank vertices but no draws. In the geometric case, it is the phase with
draws that is missing; however, we see additional phase transitions losing 3-stability,
2-stability, and 1-stability step by step as the parameter increases.
We end with a question.

Question 5.4. Does there exist for every k ∈ ℕ an offspring distribution for which the
Galton–Watson tree is k-stable with probability 1, but nonetheless infinite rank posi-
tions exist with positive probability? Numerical explorations have so far only produced
examples up to k = 2 (for example, the Geometric(q) case with q ∈ (q1 , q2 ] described
above).

Bibliography
[1] E. R. Berlekamp, J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays,
Volume 2, 2nd edition, CRC Press, 2003.
[2] A. S. Fraenkel and Y. Perl, Constructions in combinatorial games with cycles, in Infinite and
Finite Sets (to Paul Erdős on his 60th birthday), Volume 2, ed. A. Hajnal, R. Rado, and V. Sós,
pp. 667–699, North-Holland, 1975.
[3] A. S. Fraenkel and O. Rahat, Infinite cyclic impartial games, Theor. Comp. Sci. 252 (2001),
13–22.
[4] A. S. Fraenkel and U. Tassa, Strategy for a class of games with dynamic ties, Comput. Math.
Appl. 1 (1975), 237–254.
[5] A. S. Fraenkel and Y. Yesha, The generalized Sprague–Grundy function and its invariance under
certain mappings, J. Comb. Thy. A 43 (1986), 165–177.
[6] G. Grimmett and D. Welsh, Probability: an Introduction, 2nd edition, Oxford University Press,
Oxford, 2014.
[7] R. K. Guy and C. A. B. Smith, The G-values of various games, Math. Proc. Camb. Phil. Soc. 52
(1956), 514–526.
[8] A. E. Holroyd, Percolation Beyond Connectivity, PhD thesis, Univ. Cambridge, 2000.
[9] A. E. Holroyd and J. B. Martin, Galton–Watson games, Random Structures & Algorithms, 59(4)
(2021), 495–521. DOI 10.1002/rsa.21008.
[10] R. M. Karp and M. Sipser, Maximum matching in sparse random graphs, in 22nd Annual
Symposium on Foundations of Computer Science, pp. 364–375, IEEE, 1981.
[11] J.-F. Le Gall, Random trees and applications, Probab. Surveys 2 (2005), 245–311.
[12] A. N. Siegel, Combinatorial Game Theory, Amer. Math. Soc., Providence, Rhode Island, 2013.
[13] C. A. B. Smith, Graphs and composite games, J. Comb. Thy. 1 (1966), 51–81.
Ryohei Miyadera and Yushi Nakaya
Grundy numbers of impartial
three-dimensional chocolate-bar games
Abstract: Chocolate-bar games are variants of the Chomp game. Let Z≥0 be the set of
nonnegative integers, and let x, y, z ∈ Z≥0 . A three-dimensional chocolate bar is com-
prised of a set of 1 × 1 × 1 cubes with a “bitter” or “poison” cube at the bottom of the
column at position (0, 0). For u, w ∈ Z≥0 such that u ≤ x and w ≤ z, the height of
the column at position (u, w) is min(F(u, w), y) + 1, where F is an increasing function.
We denote such a chocolate bar as CB(F, x, y, z). Two players take turns to cut the bar
along a plane horizontally or vertically along the grooves, and eat the broken pieces.
The player who manages to leave the opponent with a single bitter cube is the win-
ner. In a prior work, we characterized the function f for a two-dimensional chocolate-
bar game such that the Sprague–Grundy value of CB(f , y, z) is y ⊕ z. In this study,
we characterize the function F such that the Sprague–Grundy value of CB(F, x, y, z) is
x ⊕ y ⊕ z.

1 Introduction
Chocolate-bar games are variants of the Chomp game. A two-dimensional chocolate
bar is a rectangular array of squares in which some squares are removed throughout
the course of the game. A “poisoned” or “bitter” square, typically printed in black, is
included in some part of the bar. Figure 19.1 shows an example of a two-dimensional
chocolate bar. Each player takes turns breaking the bar in a straight line along the
grooves and then “eats” a broken piece. The player who manages to leave the oppo-
nent with a single bitter (black) block wins the game.
A three-dimensional chocolate bar is a three-dimensional array of cubes in which
a poisoned cube printed in black is included in some part of the bar. Figure 19.2 shows
an example of a three-dimensional chocolate bar.
Each player takes turns dividing the bar along a plane that is horizontal or vertical
along the grooves, and then eats a broken piece. The player who manages to leave the

Acknowledgement: The authors would like to express their thanks to the anonymous referee whose
time and patience improved the quality of this work. We also thank the Integers staff for their valuable
time.

Ryohei Miyadera, Kwansei Gakuin High School, Nishinomiya City, Japan, e-mail:
runnerskg@gmail.com
Yushi Nakaya, School of Engineering, Tohoku University, Sendai City, Japan, e-mail:
math271k@gmail.com

https://doi.org/10.1515/9783110755411-019

opponent with a single bitter cube wins the game. Examples of cut chocolate bars are
shown in Figures 19.3, 19.4, and 19.5.

Example 1.1. Here, we provide examples of chocolate bars.


(i) Example of a two-dimensional chocolate bar.

Figure 19.1: Two-dimensional chocolate bar.

(ii) Example of a three-dimensional chocolate bar.

Figure 19.2: Three-dimensional chocolate bar.

Example 1.2. There are three ways to cut a three-dimensional chocolate bar.
(i) Vertical cut.

Figure 19.3: Vertical cut.



(ii) Vertical cut.

Figure 19.4: Another vertical cut.

(iii) Horizontal cut.

Figure 19.5: Horizontal cut.

The original two-dimensional rectangular chocolate bar introduced by Robin [1] is


comprised of a “bitter” or “poison” corner, as shown in Figure 19.6. Because the hori-
zontal and vertical grooves are independent, an m × n rectangular chocolate-bar game
has the same structure as the game of Nim with heaps of m − 1 and n − 1 stones.
Therefore the chocolate-bar game of Figure 19.6 is mathematically equivalent to Nim
with heaps of 5 and 3 stones (Figure 19.7). The
Grundy number of the Nim game with heaps of m − 1 and n − 1 stones is (m − 1) ⊕ (n − 1);
therefore the Grundy number of this m × n rectangular bar is (m − 1) ⊕ (n − 1).
Extending the game to three dimensions, Robin [1] also presented a cubic choco-
late bar; for example, see Figure 19.8. It can be easily determined that the three-
dimensional chocolate bar in Figure 19.8 is mathematically equivalent to Nim with
heaps of 5, 3, and 5 stones. Hence the Grundy number of this 6 × 4 × 6 cuboid bar is
5 ⊕ 3 ⊕ 5.

Example 1.3. Here, we provide an example of the traditional Nim game and two ex-
amples of chocolate bars.

Figure 19.6: Rectangular chocolate bar.

Figure 19.7: Two-pile nim.

Figure 19.8: Cuboidal chocolate bar.

In this context, it is natural to search for a necessary and sufficient condition under
which the Grundy number of a chocolate bar can be calculated as the Nim-sum of the
length, height, and width of the bar.
We have previously presented the necessary and sufficient condition for a two-
dimensional chocolate bar in [2].
This paper aims to answer the following question.

Question. What is the necessary and sufficient condition under which a three-
dimensional chocolate bar may have the Grundy number (x − 1) ⊕ (y − 1) ⊕ (z − 1), where
x, y, and z are the length, height, and width of the bar, respectively?

The remainder of this paper is organized as follows. In Section 2, we briefly review


some necessary concepts of combinatorial game theory.
In Section 3, we present a summary of the research results on the two-dimensional
chocolate-bar game provided in [2] and use this result in Section 4.
In Section 4, we study a three-dimensional chocolate bar shown in Figure 19.2
and answer the above-mentioned research question. The proof of the sufficient con-
dition for a three-dimensional chocolate bar is straightforward from the result of the

two-dimensional chocolate bar presented in [2]. However, the proof of the necessary
condition for a three-dimensional chocolate bar is more difficult to obtain.

2 Combinatorial game theory definitions and


theorem
Let Z≥0 be a set of nonnegative integers. For completeness, we briefly review some
necessary concepts from combinatorial game theory; further details may be found in
[3] or [4].

Definition 2.1. Let x and y be nonnegative integers. Expressing both in base 2, x =
∑_{i=0}^n x_i 2^i and y = ∑_{i=0}^n y_i 2^i with x_i , y_i ∈ {0, 1}, we define their nim-sum as

x ⊕ y = ∑_{i=0}^n w_i 2^i , (19.1)

where w_i = x_i + y_i (mod 2).

Lemma 1. Let x, y, z ∈ Z≥0 . If y ≠ z, then x ⊕ y ≠ x ⊕ z.

Proof. If x ⊕ y = x ⊕ z, then y = x ⊕ x ⊕ y = x ⊕ x ⊕ z = z.
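Since base-2 addition without carrying coincides, on nonnegative integers, with the bitwise XOR operation, the nim-sum is a single machine instruction in most languages. The following Python lines (our illustration, not part of the paper) check Definition 2.1 and Lemma 1 on small values.

def nim_sum(x, y):
    # Base-2 addition without carrying (Definition 2.1) is bitwise XOR.
    return x ^ y

assert nim_sum(5, 3) == 6    # 101 xor 011 = 110
# Lemma 1: y != z implies x xor y != x xor z.
assert all(nim_sum(4, y) != nim_sum(4, z)
           for y in range(8) for z in range(8) if y != z)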

As chocolate-bar games are impartial and without draws, only two outcome
classes are possible.

Definition 2.2. (a) A position is referred to as a 𝒫 -position if it is a winning position


for the previous player (the player who just moved) as long as the player plays
correctly at every stage.
(b) A position is referred to as an 𝒩 -position if it is a winning position for the next
player as long as the player plays correctly at every stage.

Definition 2.3. The disjunctive sum of the two games, denoted by G+H, is a supergame
in which a player may move either in G or H, but not in both.

Definition 2.4. For any position p of a game G, we denote by move(p) the set of positions
that can be reached from p in precisely one move.

Remark 2.1. Note that Examples 3.1 and 3.2 are examples of a move.

Definition 2.5. (i) The minimum excluded value (mex) of a set S of nonnegative inte-
gers is the least nonnegative integer that is not in S.
(ii) Let p be a position in an impartial game. The associated Grundy number is denoted
as G(p) and is recursively defined as G(p) = mex{G(h) : h ∈ move(p)}.

Lemma 2. Let S be a set of nonnegative integers, and let mex(S) = m for some m ∈ Z≥0 .
Then {k : k < m and k ∈ Z≥0 } ⊂ S.

Proof. This follows directly from Definition 2.5.

Lemma 3. If G(p) > x for some x ∈ Z≥0 , then there exists h ∈ move(p) such that
G(h) = x.

Proof. This follows directly from Lemma 2 and Definition 2.5.
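Definition 2.5 translates directly into a short program. The sketch below (ours, for illustration) computes mex and Grundy numbers for an arbitrary position, given a function returning move(p), and checks the classical fact that a single Nim heap of n stones has Grundy number n.

def mex(values):
    # Minimum excluded value of a set of nonnegative integers (Definition 2.5(i)).
    m = 0
    while m in values:
        m += 1
    return m

def grundy(p, move):
    # G(p) = mex{G(h) : h in move(p)} (Definition 2.5(ii)).
    return mex({grundy(q, move) for q in move(p)})

# A single Nim heap: from k stones one may move to any smaller heap.
assert [grundy(k, lambda k: range(k)) for k in range(8)] == list(range(8))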

The next result demonstrates the usefulness of the Sprague–Grundy theory in im-
partial games.

Theorem 1. Let G and H be impartial rulesets, and let 𝒢G and 𝒢H be, respectively, the
Grundy numbers of game g played under the rules of G and game h played under the
rules of H. Then the following conditions hold.
(i) For any position g of G, 𝒢G (g) = 0 if and only if g is a 𝒫 -position.
(ii) The Grundy number of position {g, h} in game G + H is 𝒢G (g) ⊕ 𝒢H (h).

For the proof of this theorem, see [3].


With Theorem 1, we can find the 𝒫 -positions of a game by calculating Grundy numbers,
and the 𝒫 -positions of a sum of two games by calculating the Grundy numbers of the
two component games. Therefore Grundy numbers are an important research topic in
combinatorial game theory.

3 Two-dimensional chocolate bar


Here we define two-dimensional chocolate bars and present some related results.
Because the operations of cutting and defining Grundy numbers are difficult to un-
derstand in the case of three-dimensional bars, we present examples 3.1 and 3.2 of
two-dimensional chocolate bars. We present the previously reported Theorem 2 and
prove a new lemma, Lemma 4. We use Theorem 2 to prove Lemma 4 and Theorem 4
in Section 4. Further, we use Lemma 4 to prove Theorem 3 in Section 4. The em-
ployed method involves cutting three-dimensional chocolate bars into sections and
then applying Theorem 2 and Lemma 4 to these sections. Note that a section of a
three-dimensional chocolate bar is a two-dimensional chocolate bar.
We have previously determined that a necessary and sufficient condition for the
Grundy number is (m − 1) ⊕ (n − 1) when the width of the chocolate bar increases
with respect to the distance from the bitter square, where m is the maximum width
of the chocolate bar, and n is the maximum horizontal distance from the bitter part.
This result was previously published in [2] and is presented in Theorem 2 in this sec-
tion.

Definition 3.1. A function f of Z≥0 into itself is said to be increasing if f (u) ≤ f (v) for
u, v ∈ Z≥0 such that u ≤ v.

Definition 3.2. Let f be an increasing function defined by Definition 3.1. For y, z ∈ Z≥0 ,
the chocolate bar has z + 1 columns, where the 0th column is the bitter square, and
the height of the ith column is t(i) = min(f (i), y) + 1 for i = 0, 1, . . . , z, which is denoted
as CB(f , y, z).
Thus the height of the ith column is determined by the value of min(f (i), y) + 1,
which is determined by f , i, and y.

Definition 3.3. Each player takes turns breaking the bar in a straight line along the
grooves into two pieces and eats the piece without the bitter part. The player who
breaks the chocolate bar and eats it, leaving the opponent with a single bitter block
(black block), is the winner.

We define a function f for a chocolate bar CB(f , y, z) and denote y, z as the coordi-
nates of CB(f , y, z).

Example 3.1. Let f (t) = ⌊t/2⌋, where ⌊ ⌋ is the floor function. Here we present examples
of CB(f , y, z)-type chocolate bars. Note that the function f defines the shape of the bar,
and the two coordinates y and z represent the numbers of grooves above and to the
right of the bitter square, respectively.

Figure 19.9: {2, 5}.

Figure 19.10: {1, 5}.

Figure 19.11: {1, 3}.



Figure 19.12: {0, 5}.

For a fixed function f , we define movef for each position {y, z} of the chocolate bar
CB(f , y, z). The set movef ({y, z}) is comprised of the positions of the chocolate bar ob-
tained by cutting the chocolate bar CB(f , y, z) once, and movef represents a special
case of move defined by Definition 2.4.

Definition 3.4. For y, z ∈ Z≥0 , we define movef ({y, z}) = {{v, z} : v < y}∪{{min(y, f (w)), w} :
w < z}, where v, w ∈ Z≥0 .

Remark 3.1. For a fixed function f , we use move({y, z}) instead of movef ({y, z}) for con-
venience.

Example 3.2. Here we elucidate movef for f (t) = ⌊t/2⌋. If we begin with position {y, z} =
{2, 5} in Figure 19.9 and reduce z = 5 to z = 3, then the y-coordinate (first coordinate)
becomes min(2, ⌊3/2⌋) = min(2, 1) = 1.
Therefore we have {1, 3} ∈ movef ({2, 5}), that is, we obtain {1, 3} in Figure 19.11
by cutting {2, 5}. We can easily determine that {1, 5}, {0, 5} ∈ movef ({2, 5}), {1, 3} ∈
movef ({1, 5}), and {0, 5} ∉ movef ({1, 3}). See Figures 19.9, 19.10, 19.11, and 19.12.

According to Definitions 2.5 and 3.4, we define the Grundy number of a two-
dimensional chocolate bar.

Definition 3.5. For y, z ∈ Z≥0 , we define 𝒢 ({y, z}) = mex({𝒢 ({v, z}) : v < y, v ∈ Z≥0 } ∪
{𝒢 ({min(y, f (w)), w}) : w < z, w ∈ Z≥0 }).
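Definition 3.5 can be run directly as a memoized recursion. The following sketch (ours, for illustration) fixes f (t) = ⌊t/2⌋ from Example 3.1, computes 𝒢({y, z}), and checks that the values agree with y ⊕ z on a small range, anticipating Theorem 2 below.

from functools import lru_cache

def f(t):
    return t // 2            # the function of Example 3.1

@lru_cache(maxsize=None)
def grundy(y, z):
    # Definition 3.5: reduce y, or reduce z (which may cap y at f(w)).
    options = {grundy(v, z) for v in range(y)}
    options |= {grundy(min(y, f(w)), w) for w in range(z)}
    m = 0
    while m in options:
        m += 1
    return m

# Positions of CB(f, y, z) satisfy y <= f(z); the values match the nim-sum.
assert all(grundy(y, z) == y ^ z
           for z in range(32) for y in range(f(z) + 1))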

Definition 3.6. Let h be an increasing function defined by Definition 3.1. A function h


is said to have the NS property if h satisfies condition (a).
(a) Suppose that

⌊z/2^i⌋ = ⌊z′/2^i⌋

for some z, z ′ ∈ Z≥0 and some natural number i. Then

⌊h(z)/2^(i−1)⌋ = ⌊h(z′)/2^(i−1)⌋.

Theorem 2. Let h be an increasing function as in Definition 3.1, and let 𝒢h be the
Grundy number of CB(h, y, z). Then 𝒢h ({y, z}) = y ⊕ z if and only if h has the NS property
as per Definition 3.6.

Proof of this theorem is provided in Theorems 4 and 5 in [2].
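The NS property can be tested mechanically on an initial segment of Z≥0 . The following Python check (ours; being a finite window, it gives evidence rather than a proof) accepts h(t) = ⌊t/2⌋ and rejects h(t) = 2t.

def has_ns_property(h, z_max=64, i_max=7):
    # Definition 3.6(a), checked for all z, z' < z_max and 1 <= i <= i_max.
    for i in range(1, i_max + 1):
        for z in range(z_max):
            for zp in range(z + 1, z_max):
                if z // 2**i == zp // 2**i and \
                   h(z) // 2**(i - 1) != h(zp) // 2**(i - 1):
                    return False
    return True

assert has_ns_property(lambda t: t // 2)       # the bar of Example 3.1
assert not has_ns_property(lambda t: 2 * t)    # NS property fails here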


The following new lemma for two-dimensional chocolate bars is used for three-
dimensional chocolate bars in Section 4.

Lemma 4. Suppose that h has the NS property as per Definition 3.6 and y ≤ h(z) for
y, z ∈ Z≥0 . Let

A = {y ⊕ (z − k) : k = 1, 2, . . . , z}

and

B = {min(y, h(z − k)) ⊕ (z − k) : k = 1, 2, . . . , z}.

Then A = B.

Proof. For any u, v ∈ Z≥0 with u ≤ h(v), let 𝒢h ({u, v}) be the Grundy number of
CB(h, u, v). Then by the NS property of function h and Theorem 2

𝒢h ({u, v}) = u ⊕ v. (19.2)

Let y, z ∈ Z≥0 be such that

y ≤ h(z). (19.3)

Let n be a natural number such that

2^n > z, y. (19.4)

An arbitrary element of A can be represented as

y⊕i∈A (19.5)

for some i such that 0 ≤ i < z. According to inequality (19.4),


𝒢h ({y, z + 2^n }) = y ⊕ (z + 2^n ) > y ⊕ i.

Hence, according to Lemma 3 and equation (19.2), there is

{u, v} ∈ move({y, z + 2^n }) (19.6)

such that

𝒢h ({u, v}) = u ⊕ v = y ⊕ i. (19.7)

Because h(z) is increasing, according to inequality (19.3), for w = 0, 1, 2, . . . , 2^n − 1,

y ≤ h(z + w). (19.8)



According to Definition 3.4,

move({y, z + 2^n }) = {{y − k, z + 2^n } : k = 1, 2, . . . , y}
∪ {{min(y, h(z + 2^n − k)), z + 2^n − k} : k = 1, 2, . . . , z + 2^n }.

Therefore move({y, z + 2^n }) is the union of the sets given below as (19.9), (19.10), and
(19.11).

{{y − j, z + 2^n } : j = 1, 2, . . . , y}; (19.9)


{{min(y, h(z + 2^n − k)), z + 2^n − k} : k = 1, 2, . . . , 2^n } = {{y, z + 2^n − k} : k = 1, 2, . . . , 2^n };
(19.10)

which follows from inequality (19.8);

{{min(y, h(z − k)), z − k} : k = 1, 2, . . . , z}. (19.11)

From (19.4), (y − j) ⊕ (z + 2^n ) ≥ 2^n > y ⊕ i for j = 1, 2, . . . , y. Hence, by (19.7), {u, v} does
not belong to the set (19.9).
Because 0 ≤ i < z, according to Lemma 1, y ⊕ i ≠ y ⊕ (z + j) for j = 0, 1, 2, . . . , 2^(n+1) −
1; hence {u, v} does not belong to the set (19.10). Therefore, according to (19.6), {u, v}
belongs to the set (19.11); hence

u ⊕ v = min(y, h(z − t)) ⊕ (z − t) (19.12)

for some t ∈ Z≥0 such that 1 ≤ t ≤ z.


Therefore, according to (19.7) and (19.12),

y ⊕ i = u ⊕ v ∈ B. (19.13)

As expressed by relation (19.5), y ⊕ i is an arbitrary element of A; hence, according to


(19.13), A ⊂ B. According to Lemma 1, the number of elements in A is z, and the number
of elements in B is less than or equal to z. Therefore, as A ⊂ B, A = B.

Thus far, we have considered only two-dimensional chocolate bars for increas-
ing functions. However, we can similarly consider a two-dimensional chocolate bar
CB(f , y, z) for a function f , which is not increasing, by forming an increasing func-
tion f ′ such that chocolate bars CB(f , y, z) and CB(f ′ , y, z) have the same mathematical
structure in the context of the game.
For example, the chocolate bar in Figure 19.13 is constructed by a function that is
not increasing, whereas the chocolate bar in Figure 19.14 is formed by an increasing
function; however, these two chocolate bars have the same mathematical structure in
terms of the game.

Figure 19.13: Two-dimensional chocolate bar.

Figure 19.14: Two-dimensional chocolate bar.

Therefore it is sufficient to study the case of an increasing function for two-dimen-


sional chocolate bars.

4 Three-dimensional chocolate bar


In this section, we answer the research question presented in Section 1. Theorems 3
and 4 offer proofs of sufficient and necessary conditions, respectively.

Definition 4.1. Suppose that F(u, v) ∈ Z≥0 for u, v ∈ Z≥0 . The function F is said to be
increasing if F(u, v) ≤ F(x, z) for x, z, u, v ∈ Z≥0 such that u ≤ x and v ≤ z.

By generalizing Definition 3.2 we define a three-dimensional chocolate bar.

Definition 4.2. Let F be an increasing function as in Definition 4.1.


Let x, y, z ∈ Z≥0 . A three-dimensional chocolate bar is comprised of a set of 1 × 1 ×
1-sized boxes. For u, w ∈ Z≥0 such that u ≤ x and w ≤ z, the height of the column of
position (u, w) is min(F(u, w), y) + 1, where F is an increasing function. A bitter box is
located in position (0, 0). We denote this chocolate bar as CB(F, x, y, z).

Definition 4.3. We define a three-dimensional chocolate-bar game. Each player takes


turns cutting the bar along a plane oriented horizontally or vertically along the grooves
and eats the broken piece. The player who successfully leaves the opponent with a
single bitter cube wins the game.

Example 4.1. Here we provide an example of a three-dimensional coordinate system


in Figure 19.15 and two examples of three-dimensional chocolate bars.

Figure 19.15: Three-dimensional coordinate system.

Figure 19.16: CB(F , 7, 3, 7). F (x, z) = max(⌊x/2⌋, ⌊z/2⌋).

Figure 19.17: CB(F , 5, 3, 7). F (x, z) = max(⌊x/2⌋, ⌊z/2⌋).

Next, we define moveF ({x, y, z}) in Definition 4.4 as the set containing all the positions
that can be directly reached from position {x, y, z} in one step.

Definition 4.4. For x, y, z ∈ Z≥0 , we define

moveF ({x, y, z}) = {{u, min(F(u, z), y), z} : u < x} ∪ {{x, v, z} : v < y}
∪ {{x, min(y, F(x, w)), w} : w < z}, where u, v, w ∈ Z≥0 .

For example, when F(x, z) = max(⌊x/2⌋, ⌊z/2⌋), {5, 3, 7} ∈ moveF ({7, 3, 7}) because we
obtain the chocolate bar shown in Figure 19.17 by reducing the third coordinate of the
chocolate bar in Figure 19.16 from 7 to 5.

Remark 4.1. For a fixed function F, we use move({x, y, z}) instead of moveF ({x, y, z}) for
convenience.

Lemma 5. For any k, h, i ∈ Z≥0 , we have

k ⊕ h ⊕ i = mex({(k − t) ⊕ h ⊕ i : t = 1, 2, . . . , k}
∪ {k ⊕ (h − t) ⊕ i : t = 1, 2, . . . , h} ∪ {k ⊕ h ⊕ (i − t) : t = 1, 2, . . . , i}). (19.14)

Proof. The proof is omitted because this is a well-known fact regarding Nim-sum ⊕.
See [4, Prop. 1.4, p. 81].
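Lemma 5 is also easy to confirm by direct computation; the following brute-force check (our illustration) verifies identity (19.14) for all k, h, i < 8.

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

for k in range(8):
    for h in range(8):
        for i in range(8):
            opts = {(k - t) ^ h ^ i for t in range(1, k + 1)}
            opts |= {k ^ (h - t) ^ i for t in range(1, h + 1)}
            opts |= {k ^ h ^ (i - t) for t in range(1, i + 1)}
            assert mex(opts) == k ^ h ^ i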

Theorem 3. Let F(x, z) be an increasing function. Let gn (z) = F(n, z) and hm (x) = F(x, m)
for n, m ∈ Z≥0 . If gn and hm satisfy the NS property in Definition 3.6 for any fixed n, m ∈
Z≥0 , then the Grundy number of chocolate bar CB(F, x, y, z) is

𝒢 ({x, y, z}) = x ⊕ y ⊕ z. (19.15)

Proof. Let x, y, z ∈ Z≥0 be such that y ≤ F(x, z). We prove (19.15) by mathematical
induction and suppose that 𝒢 ({u, v, w}) = u ⊕ v ⊕ w for u, v, w ∈ Z≥0 , u ≤ x, v ≤ y, w ≤ z,
v ≤ F(u, w), with u + v + w < x + y + z.
Let

A = {x ⊕ y ⊕ (z − k) : k = 1, 2, . . . , z} (19.16)

and

A′ = {x ⊕ min(y, F(x, z − k)) ⊕ (z − k) : k = 1, 2, . . . , z}. (19.17)

As gx (z) = F(x, z) satisfies the NS property, according to Lemma 4,

A = A′ . (19.18)

Let

B = {(x − k) ⊕ y ⊕ z : k = 1, 2, . . . , x} (19.19)

and

B′ = {(x − k) ⊕ min(y, F(x − k, z)) ⊕ z : k = 1, 2, . . . , x}. (19.20)



Because hz (x) = F(x, z) satisfies the NS property, according to Lemma 4,

B = B′ . (19.21)

Let

C = {x ⊕ (y − k) ⊕ z : k = 1, 2, . . . , y}. (19.22)

By the mathematical induction hypothesis and the definition of moveF in Definition 4.4,


along with equations (19.17), (19.20), and (19.22),

𝒢 ({x, y, z}) = mex({𝒢 ({x, min(y, F(x, z − k)), z − k}) : k = 1, 2, . . . , z}
∪ {𝒢 ({x − k, min(y, F(x − k, z)), z}) : k = 1, 2, . . . , x}
∪ {𝒢 ({x, y − k, z}) : k = 1, 2, . . . , y})
= mex(A′ ∪ B′ ∪ C). (19.23)

According to equations (19.21) and (19.18),

mex(A′ ∪ B′ ∪ C) = mex(A ∪ B ∪ C). (19.24)

From equations (19.16), (19.19), and (19.22) and Lemma 5 we have

mex(A ∪ B ∪ C) = x ⊕ y ⊕ z. (19.25)

From equations (19.23), (19.24), and (19.25) we have equation (19.15).
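Theorem 3 can be spot-checked by running the recursion of Definition 4.4. The sketch below (ours, for illustration) takes F(x, z) = max(⌊x/2⌋, ⌊z/2⌋) from Example 4.1 and confirms that the computed Grundy numbers agree with x ⊕ y ⊕ z on a small range of positions, consistent with Theorem 3.

from functools import lru_cache

def F(x, z):
    return max(x // 2, z // 2)    # the function of Example 4.1

@lru_cache(maxsize=None)
def grundy3(x, y, z):
    # mex over the three kinds of cuts in Definition 4.4.
    opts = {grundy3(u, min(F(u, z), y), z) for u in range(x)}
    opts |= {grundy3(x, v, z) for v in range(y)}
    opts |= {grundy3(x, min(y, F(x, w)), w) for w in range(z)}
    m = 0
    while m in opts:
        m += 1
    return m

# Positions satisfy y <= F(x, z); the values agree with the nim-sum.
assert all(grundy3(x, y, z) == x ^ y ^ z
           for x in range(8) for z in range(8)
           for y in range(F(x, z) + 1))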

Lemma 6. Let i ∈ Z≥0 and z < z ′ .
(a) We have

⌊z/2^i⌋ = ⌊z′/2^i⌋

if and only if there exists d ∈ Z≥0 such that

d × 2^i ≤ z < z ′ < (d + 1) × 2^i .

(b) Let

⌊z/2^i⌋ < ⌊z′/2^i⌋. (19.26)

Then there exist c, s, t ∈ Z≥0 such that s ≥ i, 0 ≤ t < 2^s , and

z = c × 2^(s+1) + t < c × 2^(s+1) + 2^s ≤ z ′ . (19.27)



Proof. Let z = ∑_{k=0}^n z_k 2^k and z ′ = ∑_{k=0}^n z_k′ 2^k . (a) follows directly from the definition
of the floor function. (b) falls into two cases according to inequality (19.26).

Case (i) Suppose that z < 2^n ≤ z ′ . Let c = 0 and t = z. Then we have inequality (19.27).

Case (ii) Suppose there exists s ∈ Z≥0 such that s ≥ i and z_k = z_k′ for k = n, n − 1, . . . , s +
1 and z_s = 0 < 1 = z_s′ . Then there exist c, t ∈ Z≥0 satisfying inequality (19.27).

Theorem 4. Let F(x, z) be an increasing function, and let gn (z) = F(n, z) and hm (x) =
F(x, m) for n, m ∈ Z≥0 . Suppose that the Grundy number of chocolate bar CB(F, x, y, z) is

𝒢 ({x, y, z}) = x ⊕ y ⊕ z. (19.28)

Then gn and hm satisfy the NS property in Definition 3.6 for any fixed n, m ∈ Z≥0 .

Proof. Let n ∈ Z≥0 . To prove that gn has the NS property, it suffices to show that

⌊gn (a)/2^(j−1)⌋ = ⌊gn (a + 1)/2^(j−1)⌋

for a ∈ Z≥0 such that

⌊a/2^j⌋ = ⌊(a + 1)/2^j⌋. (19.29)

To prove this by contradiction, we assume that

⌊gn (a)/2^(j−1)⌋ < ⌊gn (a + 1)/2^(j−1)⌋ (19.30)

for a ∈ Z≥0 satisfying equation (19.29). Here we assume that a ∈ Z≥0 is the smallest in-
teger satisfying equation (19.29) and inequality (19.30). According to inequality (19.30)
and Lemma 6(b), there exist i, c, t ∈ Z≥0 such that i ≥ j − 1, 0 ≤ t < 2^i , and

gn (a) = c × 2^(i+1) + t < c × 2^(i+1) + 2^i ≤ gn (a + 1). (19.31)

As i + 1 ≥ j, according to equation (19.29),

⌊a/2^(i+1)⌋ = ⌊(a + 1)/2^(i+1)⌋.

Hence, according to Lemma 6(a), there exists d ∈ Z≥0 such that

d × 2^(i+1) ≤ a < a + 1 < (d + 1) × 2^(i+1) .

Therefore we have one of the following two inequalities:

d × 2^(i+1) ≤ a < a + 1 = d × 2^(i+1) + 2^i + e < (d + 1) × 2^(i+1) (19.32)

for d, e ∈ Z≥0 such that 0 ≤ e < 2^i , or

d × 2^(i+1) ≤ a < a + 1 = d × 2^(i+1) + e < (d + 1) × 2^(i+1) (19.33)

for d, e ∈ Z≥0 such that 0 < e < 2^i .

Case (i) If we have inequality (19.32), then

(c × 2^(i+1) + 2^i ) ⊕ (a + 1) = (c × 2^(i+1) + 2^i ) ⊕ (d × 2^(i+1) + 2^i + e)
= (c ⊕ d) × 2^(i+1) + e
< (c ⊕ d) × 2^(i+1) + 2^i + (t ⊕ e)
= (c × 2^(i+1) + t) ⊕ (d × 2^(i+1) + 2^i + e)
= (c × 2^(i+1) + t) ⊕ (a + 1). (19.34)

Let g ′ (z) = min(gn (z), c × 2^(i+1) + t). We consider the two-dimensional chocolate bar
CB(g ′ , y, z) for z ≤ a + 1 as defined in Section 3. Let 𝒢g ′ ({y, z}) be the Grundy number
of this chocolate bar CB(g ′ , y, z). Because gn (z) is increasing, according to (19.31), for
z ≤ a,

gn (z) ≤ gn (a) = c × 2^(i+1) + t.

Therefore

g ′ (z) = min(gn (z), c × 2^(i+1) + t) = gn (z). (19.35)

Because a ∈ Z≥0 is the smallest integer that satisfies equation (19.29) and inequal-
ity (19.30), g ′ (z) satisfies the NS-property for z ≤ a. According to inequality (19.31) and
the definition of g ′ ,

g ′ (a + 1) = min(gn (a + 1), c × 2^(i+1) + t) = c × 2^(i+1) + t = g ′ (a).

Therefore g ′ (z) satisfies the NS-property for z ≤ a + 1. Then, according to Theorem 2,

𝒢g ′ ({y, z}) = y ⊕ z (19.36)

for y, z ∈ Z≥0 such that z ≤ a + 1 and y ≤ g ′ (z). By inequality (19.34) and equa-
tions (19.35) and (19.36)

(c × 2^(i+1) + 2^i ) ⊕ (a + 1) < (c × 2^(i+1) + t) ⊕ (a + 1) = 𝒢g ′ ({c × 2^(i+1) + t, a + 1}). (19.37)

According to inequality (19.37) and Lemma 3,

(c × 2^(i+1) + 2^i ) ⊕ (a + 1) ∈ {𝒢g ′ ({p, q}) : {p, q} ∈ moveg ′ ({c × 2^(i+1) + t, a + 1})}. (19.38)

Based on Definition 3.4, moveg ′ ({c × 2^(i+1) + t, a + 1}) is a union of two sets. The first set
is created by reducing the first coordinate of point {c × 2^(i+1) + t, a + 1}, and the second is
created by reducing the second coordinate of point {c × 2^(i+1) + t, a + 1}. Therefore, based
on g ′ (w) ≤ c × 2^(i+1) + t, we obtain

{𝒢g ′ ({p, q}) : {p, q} ∈ moveg ′ ({c × 2^(i+1) + t, a + 1})}
= {𝒢g ′ ({v, a + 1}) : 0 ≤ v ≤ c × 2^(i+1) + t − 1}
∪ {𝒢g ′ ({min(c × 2^(i+1) + t, g ′ (w)), w}) : 0 ≤ w ≤ a}
= {𝒢g ′ ({v, a + 1}) : 0 ≤ v ≤ c × 2^(i+1) + t − 1}
∪ {𝒢g ′ ({g ′ (w), w}) : 0 ≤ w ≤ a}.

Therefore, according to (19.36) and (19.38), we have

(c × 2^(i+1) + 2^i ) ⊕ (a + 1) ∈ {𝒢g ′ ({v, a + 1}) = v ⊕ (a + 1) : 0 ≤ v ≤ c × 2^(i+1) + t − 1} (19.39)
∪ {𝒢g ′ ({g ′ (w), w}) = g ′ (w) ⊕ w : 0 ≤ w ≤ a}. (19.40)

As t < 2^i , according to Lemma 1,

(c × 2^(i+1) + 2^i ) ⊕ (a + 1) ∉ {𝒢g ′ ({v, a + 1}) = v ⊕ (a + 1) : 0 ≤ v ≤ c × 2^(i+1) + t − 1}.

Therefore, by (19.39) and (19.40),

(c × 2^(i+1) + 2^i ) ⊕ (a + 1) ∈ {g ′ (w) ⊕ w : 0 ≤ w ≤ a}. (19.41)

According to inequality (19.31),

c × 2^(i+1) + 2^i ≤ gn (a + 1) = F(n, a + 1);

hence {n, c × 2^(i+1) + 2^i , a + 1} is a position of the chocolate bar CB(F, x, y, z).


Therefore, based on equation (19.28),

𝒢 ({n, c × 2^(i+1) + 2^i , a + 1}) = n ⊕ (c × 2^(i+1) + 2^i ) ⊕ (a + 1). (19.42)

Then, by relation (19.41),

n ⊕ (c × 2^(i+1) + 2^i ) ⊕ (a + 1) ∈ {n ⊕ g ′ (w) ⊕ w : 0 ≤ w ≤ a}. (19.43)

Then, according to equations (19.28) and (19.35) and the definition of gn ,

{n ⊕ g ′ (w) ⊕ w : 0 ≤ w ≤ a}
= {n ⊕ gn (w) ⊕ w : 0 ≤ w ≤ a}
= {n ⊕ F(n, w) ⊕ w : 0 ≤ w ≤ a}
= {𝒢 ({n, F(n, w), w}) : 0 ≤ w ≤ a}. (19.44)

Therefore, based on equations (19.42) and (19.44) and relation (19.43), there exists w′
such that 0 ≤ w′ ≤ a and
𝒢 ({n, c × 2^(i+1) + 2^i , a + 1}) = 𝒢 ({n, F(n, w′ ), w′ }). (19.45)

According to inequality (19.31),

F(n, w′ ) ≤ F(n, a) = gn (a) = c × 2^(i+1) + t < c × 2^(i+1) + 2^i ;

hence

{n, F(n, w′ ), w′ } = {n, min(c × 2^(i+1) + 2^i , F(n, w′ )), w′ } ∈ move({n, c × 2^(i+1) + 2^i , a + 1}).
(19.46)

Equation (19.45) and relation (19.46) contradict the definition of the Grundy number.
Case (ii) If we have inequality (19.33), then, as 0 < e < 2^i and 0 ≤ t < 2^i ,

(c × 2^(i+1) + t) ⊕ a = (c × 2^(i+1) + t) ⊕ (d × 2^(i+1) + e − 1)
= (c × 2^(i+1) + 2^i ) ⊕ (d × 2^(i+1) + 2^i + t ⊕ (e − 1)).

Therefore, by equation (19.28),


𝒢 ({n, c × 2^(i+1) + t, a}) = 𝒢 ({n, c × 2^(i+1) + 2^i , d × 2^(i+1) + 2^i + t ⊕ (e − 1)}). (19.47)

According to inequalities (19.31) and (19.33),

c × 2^(i+1) + 2^i ≤ gn (a + 1)
= F(n, a + 1)
≤ F(n, a + 1 + t ⊕ (e − 1))
≤ F(n, d × 2^(i+1) + 2^i + t ⊕ (e − 1));

hence {n, c × 2^(i+1) + 2^i , d × 2^(i+1) + 2^i + t ⊕ (e − 1)} is a position of the chocolate bar CB(F, x, y, z).
According to (19.31),

c × 2^(i+1) + t = gn (a) = F(n, a). (19.48)

According to (19.33),

a < d × 2^(i+1) + 2^i + t ⊕ (e − 1). (19.49)

According to (19.48) and (19.49),

{n, c × 2^(i+1) + t, a} ∈ move({n, c × 2^(i+1) + 2^i , d × 2^(i+1) + 2^i + t ⊕ (e − 1)}).

This relation and equation (19.47) lead to a contradiction.



Thus far, we have only considered three-dimensional chocolate bars for increas-
ing functions. However, we can similarly consider a chocolate bar CB(F, x, y, z) for
a function F that is not increasing by constructing an increasing function F ′ such that
position {x, y, z} of chocolate bar CB(F, x, y, z) and position {x, y, z} of CB(F ′ , x, y, z) have
the same bar length, height, and width and have the same mathematical structure as
a game.
For example, the chocolate bar in Figure 19.18 is constructed by a function that is
not increasing, whereas the chocolate bar in Figure 19.19 is formed by an increasing
function; however, these two chocolate bars have the same mathematical structure as
a game.

Figure 19.18: Three-dimensional chocolate bar.

Figure 19.19: Three-dimensional chocolate bar.

Therefore it suffices to study the case of an increasing function for three-dimensional


chocolate bars.

5 Unsolved problems
Certain chocolate bars remain that have not been considered.
Here we present a chocolate bar with two steps; see Figure 19.20. This chocolate
bar is represented by three coordinates x, y, z; the reduction of z may simultaneously

affect the first and second coordinates x and y. The relationships between these three
coordinates are expressed by the following two inequalities:

x ≤ ⌊(z + 3)/2⌋ and y ≤ ⌊(z + 3)/2⌋.

Figure 19.20: Two-dimensional chocolate bar with three coordinates.

The result in [2] can be applied to this type of chocolate bar; however, this proof may
be complicated compared to that in [2].
As another type of chocolate bar, we consider a three-dimensional bar with an up-
per and lower structure. An example of this type of chocolate bar is shown in Fig-
ure 19.21.

Figure 19.21: Three-dimensional chocolate bar with four coordinates.

The results of the present work can be used to study this type of chocolate bar. More-
over, research on the previously mentioned chocolate bar with two steps should be
conducted in the future.
The chocolate bars shown in Figures 19.20 and 19.21 appear to be simple gener-
alizations of the chocolate bars studied here and in [2]; however, they are technically
complex, and their investigation may prove challenging.

Bibliography
[1] A. C. Robin, A poisoned chocolate problem, Math. Gaz. 73(466) (1989), 341–343.
[2] S. Nakamura, R. Miyadera and Y. Nakaya, Impartial chocolate bar games, Grundy numbers of
impartial chocolate bar games, Integers 20 (2020), #G1.
[3] M. H. Albert, R. J. Nowakowski and D. Wolfe, Lessons In Play: An Introduction to Combinatorial
Game Theory, Second Edition, A K Peters/CRC Press, Natick, MA., United States, 2019.
[4] A. N. Siegel, Combinatorial Game Theory, volume 146 of Graduate Studies in Mathematics,
American Mathematical Society, Providence, RI, 2013.
Aaron N. Siegel
On the structure of misère impartial games
Dedicated to the memory of John Conway and Dan Hoey, with whom much of the material in this
paper was joint work

Abstract: We consider the abstract structure of the monoid ℳ of misère impartial


game values. We present several new results, including a proof that the group of frac-
tions of ℳ is almost torsion-free, a method of calculating the number of distinct games
born by day 7, and some new results on the structure of prime games. We also include
proofs of a few older results due to Conway, such as the cancellation theorem, that are
essential to the analysis, but whose proofs are not readily available in the literature.

1 Misère impartial games


The study of impartial combinatorial games has a long history, beginning with Bou-
ton’s 1901 solution to Nim [2] in both normal and misère play. Whereas the normal-
play theory made steady progress in the decades after Bouton, an understanding of
misère play took far longer to come together. The challenges, as we now know, are
due primarily to the intrinsic and often counterintuitive complexity of misère combi-
natorics.
Grundy and Smith were the first to appreciate the full scope of the difficulties. With
evident frustration they wrote, in a seminal 1956 paper [6],

Various authors have discussed the “disjunctive compound” of games with the last player win-
ning (Grundy [5]; Guy and Smith [7]; Smith [11]). We attempt here to analyse the disjunctive com-
pound with the last player losing, though unfortunately with less complete success . . . .

They understood the proviso and its role in misère simplification (see Section 2.3),
and they held out hope that new techniques might be discovered that would lead to
additional simplification rules. Those hopes were dashed by Conway, who proved in
the early 1970s that if a game G cannot be simplified via the Grundy–Smith rule, then
in fact G is in simplest form. This makes application of the canonical misère theory
essentially intractable in the general case, and subsequent analyses of misère games
have focused on alternative reductions, such as the quotient construction due to Plam-
beck [8] and Plambeck and Siegel [9].
Nonetheless, the canonical misère theory—despite its limited practical utility—
gives rise to a fascinating structure theory. Define misère game values in the usual

Aaron N. Siegel, San Francisco, California, USA, e-mail: aaron.n.siegel@gmail.com

https://doi.org/10.1515/9783110755411-020

manner [10, Sect. V.1]:

G = H if o− (G + X) = o− (H + X) for all misère impartial games X,

where o− (G) denotes the misère outcome of G. The set of misère game values forms a
commutative monoid ℳ, and it is an alluring problem to study the structure of this
monoid for its own sake. Conway proved in the 1970s that ℳ is cancellative; hence it
embeds in its group of differences 𝒟, and we have a rather curious Abelian group that
arises cleanly “in nature.”
Several results on the structure of ℳ (and 𝒟) are stated in ONaG without proof.
The aim of this paper is twofold: first, to gather together what is known about the
structure of ℳ, including proofs of previously known results, into a single narrative;
and second, to extend somewhat the frontier of knowledge in this area.
In the former category, we include in particular a proof of the cancellation the-
orem, a derivation of the exact count of |ℳ6 | (the number of distinct games born by
day 6), and a proof that every game G can be partitioned, nonuniquely, into just finitely
many prime parts (a definition is given in Section 7). All three results are due to Con-
way [3], although the count of |ℳ6 | was stated inaccurately in ONaG and later cor-
rected by Thompson [12].
We also offer a smattering of new results:
– In Section 3, we show that Conway’s mate construction is not invariant under
equality. In retrospect, this should perhaps not be shocking, but it came as a sur-
prise when it was discovered.
– In Section 4 (with additional details given in Appendix A), we show how to com-
pute |ℳ7 |, the exact number of games born by day 7. (The output of this calcula-
tion, though it fits comfortably in computer memory, is too large to include in its
entirety in a journal paper.)
– In Section 6, we show that 𝒟 is “almost” torsion-free (in precise terms, it is torsion-
free modulo association, as defined in Section 6). The proof is not especially diffi-
cult, but it is not obvious; Conway previously gave thought to this question, writ-
ing in ONaG: “Further properties of the additive semigroup of games seem quite
hard to establish—if G+G = H +H is G necessarily equal to H or H +1?” (Theorem 29
implies an affirmative answer).
– In Sections 6 and 7, we extend and elaborate on Conway’s theory of prime parti-
tions. As one application of this work, we show that all games born by day 6 have
the unique partition property.

The first and last results were joint work with John Conway and Dan Hoey, conducted
in Princeton during the 2006–2007 academic year. In addition, a computational en-
gine for misère games, written by Hoey as an extension to cgsuite, has proved invalu-
able in assisting the work in this paper.

2 Prerequisites
We briefly review the notation and foundational material for misère games. Results in
this section are stated without proof and are originally due to Grundy and Smith [7]
and Conway [3]. A full exposition, including proofs for all results stated here, can be
found in [10, Sects. V.1–V.3].
Formally, an impartial game is identified with the set of its options, so that G′ ∈ G
means “G′ is an option of G”. We write G ≅ H to mean that G and H are identical as
sets; thus it is possible that G = H but G ≇ H.
It is customary to define

0 = {},
∗ = {0},
∗2 = {0, ∗},
∗m = {0, ∗, ∗2, . . . , ∗(m − 1)}.

Since only impartial games are under consideration in this paper, we will follow Con-
way’s convention and drop the asterisks, writing

0, 1, 2, . . . , m, . . .

in place of

0, ∗, ∗2, . . . , ∗m, . . . .

Thus 2 + 2 is not the integer 4; it is the game obtained by playing two copies of ∗2
side-by-side.
The following conventions will also be used freely. Options of G may be writ-
ten by concatenation, rather than using set notation; for example, 632 is the game
{∗6, ∗3, ∗2}, not the integer six hundred thirty-two. Subscripts denote sums of games:
42 is ∗4 + ∗2, and 6422 is ∗6 + ∗4 + ∗2 + ∗2. Finally, we write G# (pronounced “G sharp”)
for the singleton {G}. Sometimes, we will employ a chain of sharps and subscripts,
which should be read left-to-right; for example,

2#4## = (2# + 4)## .

We write ℳ for the commutative monoid of all (finite) misère impartial game val-
ues. This monoid can be stratified according to the usual hierarchy:

Definition 1. The formal birthday b̃(G) of a game G is defined by

b̃(0) = 0;  b̃(G) = max{b̃(G′ ) + 1 : G′ ∈ G}.

The birthday b(G) is given by

b(G) = min{b̃(H) : H = G}.

It is clear from this definition that b(G) depends only on the value of G, so that we
may write, for n ≥ 0,

ℳn = {G ∈ ℳ : b(G) ≤ n},

the set of game values born by day n, and

ℳ = ⋃_n ℳn .

For a set X, we will write |X| for the cardinality of X, so that |ℳn | is the number of
distinct games born by day n.

2.1 Misère simplification


The starting point for the canonical misère theory is the Grundy–Smith simplification
rule.

Definition 2. Let G ≅ {G1′ , . . . , Gk′ }. Let H be a game whose options include those of G:

H ≅ {G1′ , . . . , Gk′ , H1′ , . . . , Hl′ }.

We say that H simplifies to G provided that


(i) G ∈ Hj′ for each Hj′ (i. e., each new option Hj′ contains a reverting move back to G
itself), and
(ii) if G ≅ 0, then o(H) = N .

Clause (ii) is known as the proviso.

Theorem 3. If H simplifies to G, then G = H.

A simple but useful application of this theorem:

Theorem 4 (Misère mex rule). Let a1 , a2 , . . . , ak ∈ ℕ, and suppose that

G ≅ {a1 , a2 , . . . , ak }.

Then G = m, where m = mex{a1 , a2 , . . . , ak }, provided that at least one ai is 0 or 1.

(Here, as usual, mex(X) denotes the minimal excluded value of X: the least m ≥ 0
with m ∉ X.)

2.2 The mate of G


Definition 5. The mate of G, denoted by G− , is defined by

G− = 1 if G ≅ 0, and G− = {(G′ )− : G′ ∈ G} otherwise.

Proposition 6. For every game G, the sum G + G− is a (misère) P -position.

Proposition 6 has the following useful corollary.

Proposition 7. For every game G, we have G ≠ G + 1.
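Proposition 6 can be verified by brute force on small game trees. In the Python sketch below (our own illustration, with games encoded as frozensets of options), misere_outcome implements the basic recursion: a position with no moves is an 𝒩 -position in misère play, and otherwise a position is an 𝒩 -position if and only if some option is a 𝒫 -position.

ZERO = frozenset()              # the game 0
STAR = frozenset({ZERO})        # the game 1 (= *) in Conway's convention

def add(G, H):
    # Disjunctive sum: a move is made in exactly one summand.
    return frozenset({add(Gp, H) for Gp in G} | {add(G, Hp) for Hp in H})

def mate(G):
    # Definition 5.
    return STAR if not G else frozenset(mate(Gp) for Gp in G)

def misere_outcome(G):
    # In misere play, a player with no available move wins.
    if not G:
        return 'N'
    return 'N' if any(misere_outcome(Gp) == 'P' for Gp in G) else 'P'

# Proposition 6: G + G^- is a misere P-position, on a few small games.
TWO = frozenset({ZERO, STAR})   # the game 2 (= *2)
for G in (ZERO, STAR, TWO, frozenset({TWO})):
    assert misere_outcome(add(G, mate(G))) == 'P'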

2.3 The simplest form theorem


Now suppose G has an option G′ , which in turn has an option G′′ = G. We say that G′
is reversible through G′′ . The simplest form theorem can be stated as follows.

Theorem 8 (Simplest form theorem). Suppose that neither G nor H has any reversible
options, and assume that G = H. Then G ≅ H.

If G has no reversible options, then we say that G is in canonical form (or simplest
form). It is a remarkable fact that reversible moves can only arise in the context of
Grundy–Smith simplification.

Theorem 9. Suppose that every option of H is in canonical form and that some option
of H is reversible through G. Then H simplifies to G.

As in the partizan theory, a constructive test for equality is an essential ingredient


in the proof of the simplest form theorem. The details of this constructive test will be
independently important, so we review them here as well.

Definition 10. We say that G is linked to H (by T) and write G ⋈ H if

o(G + T) = o(H + T) = P

for some T.

Lemma 11. Let G and H be games. Then G is linked to H if and only if G is equal to no
H ′ and no G′ is equal to H.

Theorem 12. Let G and H be games. G = H if and only if the following four conditions
hold:
(i) G is linked to no H ′ ;
(ii) No G′ is linked to H;
(iii) If G ≅ 0, then H is an N -position;

(iv) If H ≅ 0, then G is an N -position.

Clauses (iii) and (iv) are, of course, a restatement of the proviso.

3 Concubines
Conway introduced the mate G− as a stepping stone to the simplest form theorem.
The terminology perhaps suggests invariance of form, and we might be tempted to
suppose that G = H implies G− = H − , that G−− = G, and so forth; but in this section,
we show that essentially all such assertions are false. These are not especially deep
observations, but they do not appear to have been pointed out before.
As an example, let G = (2## 1)# . It is easily seen to be in simplest form. However,
G− = (2## 0)# , and since this is an N -position, the proviso is satisfied, and the unique
option 2## 0 reverses through 0. Therefore G− = 0. Likewise, if H ≅ (2## 0)# , then we
have H = 0, but H − = (2## 1)# ≠ 1.

Definition 13. Suppose that G and H are in simplest form. We say that H is a concubine
of G if H − = G, but G− ≠ H.

Proposition 14. Every game has a concubine.

Proof. For G in simplest form, we define a game c(G) recursively by

c(G) = (2## 1)# if G = 0, and c(G) = {c(G′ ) : G′ ∈ G} otherwise.

Since the mate of (2## 1)# is equal to 0, it is immediately clear that c(G)− = G. It remains
to show that c(G) is in simplest form. Suppose (for contradiction) some c(G) is not,
and choose G to be a minimal counterexample. Then c(G) simplifies to some game H.
If c(G′ )′ = c(G′′ ) in all cases, then G is obtained from G′′ by adding reversible op-
tions, contradicting the assumption that G is in canonical form. (The proviso is clearly
satisfied, since it is easily seen that o(G) = o(c(G)).) Otherwise, we must have some
G′ = 0 and H = c(G′ )′ = 2## 1. This means that c(G) must have 1 as an option. Since ev-
ery option of c(G) has the form c(G′ ), and since each c(G′ ) is canonical (by minimality
of G), this gives the desired contradiction.

So, for example, (2## 1)# itself has a concubine, namely

(2## (2## 1)## )# ,

and indeed there are arbitrarily long chains G, G− , G−− , G−−− , . . . of distinct games
(where it is understood that at each iteration, we pass to the canonical form).

|ℳ0 | = 1
|ℳ1 | = 2
|ℳ2 | = 3
|ℳ3 | = 5
|ℳ4 | = 22
|ℳ5 | = 4171780

|ℳ6 | = 2^4171780 − 2^2096640 − 2^2095104 − 2^2094593 − 2^2094080 − 2^2091523 − 2^2091522
        − 2^2088960 − 2^2088705 − 2^2088448 − 2^2088193 − 2^2086912 − 2^2086657
        − 2^2086401 − 2^2086145 − 2^2085888 − 2^2079234 + 2^1960962 + 21
Figure 20.1: The first few values of |ℳn |.

4 Games born by day n


A central goal in the structure theory of (any particular class of) combinatorial games
is to count the number of game values born by day n. In the most familiar case—
normal-play partizan games—and many others, obtaining exact counts is a tedious
combinatorial problem, with the only known techniques requiring an exhaustive enu-
meration of the values being counted. In the impartial misère theory, by contrast, the
simplification rule is surprisingly rigid, and we can write down a recurrence relation
for |ℳn | that depends only on an enumeration of ℳn−2 .
The values |ℳ0 | through |ℳ5 | were first calculated by Grundy and Smith [6] in
1956, although |ℳ5 | = 4171780 was not proven correct until Conway, armed with the
simplest form theorem, came along 20 years later. These initial values are summarized
in Figure 20.1. Conway added |ℳ6 | to the list, although the figure initially given in
ONaG [3] contained errors; the corrected count, also shown in Figure 20.1, was first
reported by Thompson [12] in 1999. In this section, we give ad hoc derivations of each
of these numbers and then extend the list one step further to compute the exact value
of |ℳ7 |.

4.1 Games born by day 4


The earliest games can be enumerated by inspection. The only games born by day 2
are 0, 1, 2, and 1# , but 1# = 0 by the mex rule, giving |ℳ2 | = 3. On day 3, we have
additionally 3, 20, 21, and 2# , but 20 and 21 are reducible (also by the mex rule), so
there are just two new games, giving |ℳ3 | = 5.
On day 4, consider the 2^5 subsets of ℳ3 ; 2^4 of them contain only nimbers, and the remaining 2^4 have 2# as an option. For the subsets containing only nimbers, the
canonical games are

0, 1, 2, 3, 4, 2# , 3# , and 32.

G           |𝒮4^G|      Adjustment
2#          14          −2^14 + 1
3           10          −2^10 + 1
2           12          −2^12 + 1
1           9           −2^9 + 1
0           10          −2^10 + 1
(Proviso)               +2^9 − 1

2^22 − 2^14 − 2^10 − 2^12 − 2^9 − 2^10 + 2^9 + 4 = 4171780.
Figure 20.2: The number of games born by day 5.

(All other combinations of nimbers contain either 0 or 1, and therefore reduce to a nim-
ber by the mex rule).
Now suppose 2# ∈ G and G is reducible. G must simplify to 2, since 2 is the only
option of 2# ; therefore 0 ∈ G and 1 ∈ G, and all other options of G must contain 2.
The only possibilities are 2# 10 and 2# 310. So of the 2^4 subsets containing 2# , two are
reducible, and the other fourteen are not. Therefore

|ℳ4 | = 8 + 14 = 22.
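
These small counts can be reproduced by brute force. The sketch below (ours) adds a canonicalization step to the earlier helpers: it applies the simplification of Theorem 9, honoring the proviso when a game would reverse out to 0, and then enumerates canonical forms day by day. Day 5, with its 2^22 subsets, is already far beyond this naive approach.

    from itertools import combinations

    def canonical(g):
        h = frozenset(canonical(gp) for gp in g)
        # Theorem 9: if some option of h has an option equal to h, then
        # h simplifies to that game; reversing out to 0 additionally
        # requires h to be an N-position (the proviso).
        for hp in h:
            for hpp in hp:
                if eq(hpp, h) and (hpp or outcome(h) == 'N'):
                    return hpp
        return h

    def M(n):
        # The canonical forms of all games born by day n.
        if n == 0:
            return [ZERO]
        prev = M(n - 1)
        out = []
        for k in range(len(prev) + 1):
            for opts in combinations(prev, k):
                c = canonical(frozenset(opts))
                if c not in out:    # canonical forms are equal iff identical
                    out.append(c)
        return out

    assert [len(M(n)) for n in range(5)] == [1, 2, 3, 5, 22]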

4.2 Games born by day 5


There are 2^22 subsets of ℳ4 . To compute |ℳ5 |, we subtract from this total the number
of reducible subsets of ℳ4 . Now if H ⊂ ℳ4 is reducible, then it must simplify to some
other game G ≇ H. If we take G to be in simplest form, then by the simplest form
theorem it is uniquely determined. Moreover, there must be at least one H ′ ∈ H with
G ∈ H ′ , so that necessarily G ∈ ℳ3 .
This shows that every reducible H ⊂ ℳ4 is obtained from a unique G ∈ ℳ3 by
adding reversible moves, so that H has the form

H ≅ G ∪ {H1′ , . . . , Hk′ },

with G ∈ each Hi′ .


The calculation is summarized in Figure 20.2. For each G ∈ ℳ3 , let

𝒮4^G = {H ∈ ℳ4 : G ∈ H}.

When G ≇ 0, the added reversible moves {H1′ , . . . , Hk′ } can be any nonempty subset of 𝒮4^G , so exactly 2^|𝒮4^G| − 1 games simplify to G.
When G ≅ 0, the proviso requires additionally that o(H) = N , so that at least one
of H1′ , . . . , Hk′ must be a P -position. So if H1′ , . . . , Hk′ are all N -positions with 0 ∈ Hi′ ,

G           |𝒮5^G|                                          Adjustment
2# 3210     2^21 − 2^13 − 2^11 − 2^9 − 2^8 − 2^8            −2^2085888 + 1
2# 321      2^21 − 2^13 − 2^11 − 2^9 − 2^8                  −2^2086144 + 1
2# 320      2^21 − 2^13 − 2^11 − 2^9 − 2^8                  −2^2086144 + 1
2# 32       2^21 − 2^13 − 2^11 − 2^9                        −2^2086400 + 1
2# 31       2^21 − 2^13 − 2^9 − 2^8                         −2^2088192 + 1
2# 30       2^21 − 2^13 − 2^9 − 2^8                         −2^2088192 + 1
2# 3        2^21 − 2^13 − 2^9                               −2^2088448 + 1
2# 210      2^21 − 2^13 − 2^11 − 2^8 − 2^8                  −2^2086400 + 1
2# 21       2^21 − 2^13 − 2^11 − 2^8                        −2^2086656 + 1
2# 20       2^21 − 2^13 − 2^11 − 2^8                        −2^2086656 + 1
2# 2        2^21 − 2^13 − 2^11                              −2^2086912 + 1
2# 1        2^21 − 2^13 − 2^8                               −2^2088704 + 1
2# 0        2^21 − 2^13 − 2^8                               −2^2088704 + 1
2##         2^21 − 2^13                                     −2^2088960 + 1
32          2^21 − 2^11 − 2^9                               −2^2094592 + 1
3#          2^21 − 2^9                                      −2^2096640 + 1
4           2^21 − 2^11 − 2^9 − 2^8 − 2^8                   −2^2094080 + 1
3           2^21 − 2^11 − 2^8 − 2^8                         −2^2094592 + 1
2#          2^21 − 2^11                                     −2^2095104 + 1
2           2^21 − 2^8 − 2^8 − (2^14 − 1) − (2^10 − 1)      −2^2079234 + 1
1           2^21 − 2^9 − (2^12 − 1) − (2^10 − 1)            −2^2091522 + 1
0           2^21 − (2^12 − 1) − (2^10 − 1) − (2^9 − 1)      −2^2091523 + 1
(Proviso)   2^21 − (2^12 − 1) − (2^10 − 1) − 2^17           +2^1960962 − 1

|ℳ6 | = 2^4171780 − 2^2096640 − 2^2095104 − 2^2094593 − 2^2094080 − 2^2091523 − 2^2091522
        − 2^2088960 − 2^2088705 − 2^2088448 − 2^2088193 − 2^2086912 − 2^2086657
        − 2^2086401 − 2^2086145 − 2^2085888 − 2^2079234 + 2^1960962 + 21

Figure 20.3: The number of games born by day 6.

then in fact H is canonical. We must therefore add back such subsets into the count.
This gives rise to the additional “proviso” term in Figure 20.2; the exponent is the count
of N -positions in ℳ4 that contain 0 as an option.
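
The exponents appearing in Figure 20.2, including the proviso exponent 9, can be read off from the day-4 enumeration; continuing the sketches above (ours):

    M4 = M(4)
    THREE = frozenset({ZERO, ONE, TWO})     # the nimber 3

    def s4(g):
        # |S_4^G|: the number of H in M_4 having G as an option.
        return sum(1 for h in M4 if g in h)

    assert [s4(g) for g in (TWO_SHARP, THREE, TWO, ONE, ZERO)] == [14, 10, 12, 9, 10]
    assert sum(1 for h in M4 if outcome(h) == 'N' and ZERO in h) == 9
    assert (2**22 - (2**14 - 1) - (2**10 - 1) - (2**12 - 1) - (2**9 - 1)
            - (2**10 - 2**9)) == 4171780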

4.3 Games born by day 6


The calculation of |ℳ6 | proceeds similarly and is shown in Figure 20.3. Just as before, we subtract from 2^4171780 the number of reducible subsets of ℳ5 . Each reducible subset H ⊂ ℳ5 is obtained from a unique G ∈ ℳ4 by adding reversible moves, and for G ≇ 0, there are exactly 2^|𝒮5^G| − 1 such possibilities for H, where

𝒮5^G = {H ∈ ℳ5 : G ∈ H}.

As before, the case G ≅ 0 entails an additional “proviso term,” representing those subsets of 𝒮5^0 that are not reducible to 0 because they are P -positions.

2^(2^4171780 − 2^2096640 − 2^2095104 − 2^2094593 − 2^2094080 − 2^2091523 − 2^2091522
   − 2^2088960 − 2^2088705 − 2^2088448 − 2^2088193 − 2^2086912 − 2^2086657
   − 2^2086401 − 2^2086145 − 2^2085888 − 2^2079234 + 2^1960962 + 21)

− 2^(2^4171779 − 2^2085887)

− 2^(2^4171779 − 2^2086143 + 1)

− 2^(2^4171779 − 2^2086143 − 2^2085887 + 1)

⋅⋅⋅ (758656 additional terms) ⋅⋅⋅

− 2^(2^4171779 − 2^2096640 − 2^2094592 − 2^2094080 − 2^2091522 − 2^2091521 − 2^2088448
     − 2^2088193 − 2^2086400 − 2^2086145 − 2^2085888 − 2^2079233 + 2^1960961 + 10)

+ 2^(2^4171779 − 2^3926530 − 2^2094592 − 2^2094080 − 2^2088704 − 2^2088192
     − 2^2086656 − 2^2086400 − 2^2086144 − 2^2085888 − 2^2079234 + 9)

+ 4171779

Figure 20.4: Partial expansion of the expression for |ℳ7 |.

The calculation of the critical exponents |𝒮5^G| is by recursive application of the same principle. For a given G ∈ ℳ4 , there are exactly 2^21 subsets of ℳ4 containing G, so we subtract from 2^21 the number of reducible such subsets.
Figure 20.3 breaks down the calculation of |𝒮5^G| for each G. For H ⊂ ℳ4 with G ∈ H,
there are two distinct ways that H might be reducible:
(i) H simplifies to some G′ ∈ G, so that G itself is reversible (together with various
other options of H); or
(ii) H simplifies to some other K ∈ ℳ3 with G ∈ K (so that G is not reversible, but
remains as an option of the simplified game K).

Case (i) yields one term for each G′ ∈ G, and since G′ ∈ ℳ3 , there are at most five such
terms (the maximum is achieved for G = 2# 3210). Case (ii) requires that G ∈ ℳ2 ; hence
it is only relevant for G = 0, 1, 2, explaining the special structure of those three rows in
Figure 20.3.
The precise details of how the terms in Figure 20.3 are calculated are fairly subtle.
Since the calculation of |ℳ6 | is only slightly easier than the general case, those details
are deferred until Appendix A.
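
One part of the bookkeeping is easy to confirm independently: combining like terms in the 22 row adjustments of Figure 20.3, together with their +1 constants and the proviso row, reproduces the expression for |ℳ6 | quoted in Figure 20.1. Python's arbitrary-precision integers evaluate both sides exactly, even at four million bits (the exponent list below is our transcription of the figure):

    rows = [2085888, 2086144, 2086144, 2086400, 2088192, 2088192, 2088448,
            2086400, 2086656, 2086656, 2086912, 2088704, 2088704, 2088960,
            2094592, 2096640, 2094080, 2094592, 2095104, 2079234, 2091522,
            2091523]
    published = (2**4171780 - 2**2096640 - 2**2095104 - 2**2094593 - 2**2094080
                 - 2**2091523 - 2**2091522 - 2**2088960 - 2**2088705 - 2**2088448
                 - 2**2088193 - 2**2086912 - 2**2086657 - 2**2086401 - 2**2086145
                 - 2**2085888 - 2**2079234 + 2**1960962 + 21)
    assert 2**4171780 - sum(2**e for e in rows) + 22 + 2**1960962 - 1 == published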

4.4 Games born by day 7


The preceding arguments can be abstracted out into a set of recurrence relations for
|ℳn | that work for all n. These relations (and a proof that they work) are given in Appendix A. They show in particular that |ℳ7 | has the form

2^|ℳ6| − 2^a1 − ⋅ ⋅ ⋅ − 2^ak + 2^b + 4171779,

in which all the exponents a1 , . . . , ak , b are close to 2^4171779 , with b (the “proviso term”) somewhat smaller than the others. In the initial calculation, there are precisely 4171780 terms −2^ai , and by combining like terms we can reduce this number to
758660. The resulting expression is obviously too large to publish in a journal paper,
but it is small enough to fit comfortably in computer memory and is therefore easily
computable. A partial expansion is given in Figure 20.4.
The same method works in theory to compute |ℳ8 | (and higher), but the expres-
sion for |ℳ8 | would have a number of terms on the order of |ℳ6 |. In some sense, the
chained powers of two in the expression for |ℳn | encode the entire structure of ℳn−2 ,
and so we have reached the practical limit of this calculation. We now turn our atten-
tion to the abstract structure of ℳ itself.

5 The cancellation theorem


The next order of business is to prove that ℳ is cancelative.

Theorem 15 (Cancellation theorem [3]). If G + T = H + T, then G = H.

The cancellation theorem was discovered by Conway in the 1970s, and it is stated
without proof in ONaG. A proof has been published once before, in Dean Allemang’s
1984 thesis [1]. Since the proof is fairly tricky and is essential to the succeeding anal-
ysis, we give a full exposition here.

Definition 16. We say that H is a part of G if G = H + X for some X. In this case, we say
that H + X is a partition of G and X is the counterpart of H in G.

Lemma 17 (Conway [4]). The only parts of 0 are 0 and 1.

Proof. Let X + Y = 0 be any partition of 0, and assume X and Y to be in simplest form.


Now for every option X ′ ∈ X, we have X ′ + Y ⋈̸ 0. It cannot be the case that X ′′ + Y = 0,
since this would imply

X = X + (X ′′ + Y) = X ′′ + (X + Y) = X ′′ ,

contradicting the assumption that X is in simplest form. Therefore X ′ + Y ′ = 0 for some Y ′ ∈ Y, and in particular X ′ is a part of 0.
By induction we may assume that X ′ = 0 or 1, so that the only options of X are 0
and 1. This implies that X = 0, 1, or 2. By an identical argument we also have that
Y = 0, 1, or 2. Since none of 2 + 0, 2 + 1, or 2 + 2 is equal to 0, we conclude that X = 0
or 1, and likewise for Y.

Definition 18 (Conway [4]). We say that T is cancellable if, for all G and H,
(i) G + T = H + T implies G = H, and
(ii) G ⋈ H implies G + T ⋈ H + T.

Lemma 19 (Conway [4]). If T is cancellable, then so is any part of T.

Proof. If T = X + Y, then G + X = H + X implies G + T = H + T, and G + T ⋈ H + T implies G + X ⋈ H + X.

Lemma 20 (Conway [3]). For all T,
(a) T is cancellable, and
(b) T has only finitely many parts.

Proof. The proof is by induction on T. For T ≅ 0, (a) is trivial, and (b) follows from
Lemma 17, so assume that T ≇ 0. Since the statements are independent of the form
of T, we can furthermore assume that T is given in simplest form. We will first prove (b)
and then (a).

(b) Assume (for contradiction) that T has infinitely many distinct parts X1 , X2 , . . . ,
and write

T = X1 + Y1 = X2 + Y2 = ⋅ ⋅ ⋅ .

Since T ≇ 0, there necessarily exists an option T ′ ∈ T. For each i, we have T ′ ⋈̸ Xi + Yi , so either T ′′ = Xi + Yi , or T ′ = Xi′ + Yi , or else T ′ = Xi + Yi′ . The first possibility cannot
occur, since it contradicts the assumption that T is in simplest form. Furthermore,
T ′ has only finitely many parts, so T ′ = Xi + Yi′ for at most finitely many i. Thus for
infinitely many values of i, we have T ′ = Xi′ + Yi . It follows that there are m < n with
Ym = Yn . Since T ′ is cancellable, by Lemma 19 so is Ym . Since Xm + Ym = Xn + Yn , this
implies Xm = Xn , contradicting the assumption that all Xi are distinct.

(a) The proof is by induction on G and H. First, suppose G ⋈ H. To show that G + T ⋈ H + T, it suffices to show that (G + T)′ ≠ H + T and G + T ≠ (H + T)′ . Now since
G ⋈ H, we have G′ ≠ H and G ≠ H ′ , so by induction G′ + T ≠ H + T and G + T ≠ H ′ + T.
Furthermore, by induction on T we have G + T ′ ⋈ H + T ′ , and this implies G + T ′ ≠ H + T
and G + T ≠ H + T ′ .
Next, suppose G + T = H + T. Then G′ + T ⋈̸ H + T and G + T ⋈̸ H ′ + T, so by induction G′ ⋈̸ H and G ⋈̸ H ′ . To complete the proof that G = H, we must verify the proviso. By symmetry it suffices to assume that G = 0 and show that H = 0 as well.


Since G = 0, we have T = H + T. If T = 0, then the conclusion is immediate;
otherwise, there exists an option T ′ ∈ T, and we have T ′ ⋈̸ H + T. There are three
cases.

Case 1: T ′′ = H + T. Then T ′′ = T, so T ′′ = H + T ′′ , and H = 0 by induction.



Case 2: T ′ = H ′ + T. Then T is a part of T ′ . By induction T ′ is cancellable, so by Lemma 19 so is T. Since T = H + T, cancelling T now gives H = 0.
Case 3: T ′ = H +T † , where T † is an option of T (possibly distinct from T ′ ). By repeated
application of the identity T = H + T we have

T = H + T = H + H + T = H + H + H + T = ⋅⋅⋅.

Since T has finitely many parts, we must have m ⋅ H = n ⋅ H for some m < n. Since T ′ is
cancellable and H is a part of T ′ , by Lemma 19 H is cancellable. Therefore (n−m)⋅H = 0.
By Lemma 17 we have H = 0 or 1, but Proposition 7 gives H ≠ 1.
A simple corollary of the cancellation theorem will prove to be useful.

Corollary 21. Suppose G + X = Y with G in simplest form. For every option G′ ∈ G, either
G′ + X ′ = Y, or G′ + X = Y ′ .

Proof. By Theorem 12 G′ + X ⋈̸ Y. By Lemma 11 either G′′ + X = Y, or G′ + X ′ = Y, or else G′ + X = Y ′ . In the first case, we have G′′ + X = G + X, so by cancellation G′′ = G,
contradicting the assumption that G is canonical.

6 Parts and differences


We write X = G − H to mean G = H + X. By cancellation there is at most one such X (up
to equality), so this notation is reasonable.
When we proved the cancellation theorem, we showed that every game has just
finitely many parts. In particular, this implies that for a fixed G, there are only finitely
many H such that G − H exists.

Lemma 22 (Difference lemma). If G and H are in simplest form and G − H exists, then
(a) either G − H = G′ − H ′ for some G′ and H ′ , or
(b) every G′ − H and G − H ′ exists, and G − H = {G′ − H, G − H ′ }.

Proof. Suppose G − H exists, say G = H + X, but it is not equal to any G′ − H ′ . Now by Corollary 21, for every G′ , we have either G′ = H ′ + X or G′ = H + X ′ . However,
G′ = H ′ + X would imply G′ − H ′ = X = G − H, which we assumed is not the case.
Therefore G′ = H +X ′ , so that G′ −H exists and is an option of X. An identical argument
shows that every G − H ′ exists and is also an option of X.
Finally, for every X ′ , we have either G′ = H +X ′ or G = H ′ +X ′ , so either X ′ = G′ −H,
or X ′ = G − H ′ .

Definition 23. Let X be a part of G. We say that X is novel if every G − X ′ and every
G′ − X exists. Otherwise, we say that X is derived. We say that a partition G = X + Y is
novel if either X or Y is novel and derived if both parts are derived.

Lemma 24. If G = X + Y is derived, then there exist G′ and X ′ such that G′ = X ′ + Y.

Proof. By Corollary 21, for every G′ , we have either G′ = X ′ + Y or G′ = X + Y ′ . Likewise, for every X ′ , we have either G′ = X ′ + Y or G = X ′ + Y ′ . Thus if the conclusion fails,
then X is novel, whence X + Y is novel.

The preceding analysis gives a constructive way to compute all partitions of a given G (and therefore a constructive calculation or a proof of nonexistence for
G − H as well). In particular, for any partition G = X + Y, at least one of X or Y must be
a part of some G′ ∈ G; thus we can recursively compute the parts of each G′ to build a
set of candidates for the parts of G.
If a partition X + Y is novel, say with X novel, then necessarily X is a part of every G′ , and Y = {G − X ′ , G′ − X}. If X + Y is derived, then X = some G′ − Y ′ and Y = some G′ − X ′ , so we can find both X and Y among the parts of various G′ .


Through a more careful analysis, we can make this test fairly efficient and avoid
computing unnecessary sums. A complete algorithm, due jointly to Dan Hoey and the
author, is given in Appendix B.
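
As a crude counterpoint to the algorithm of Appendix B, partitions can also be sought by bounded brute force. Since a part may have a larger birthday than G itself (already the odd associate G + 1 can exceed it), the following search (ours) is only a one-sided check within a chosen bound, not a decision procedure:

    def partitions_upto(g, d):
        # All pairs (X, Y) of canonical games born by day d with X + Y = G.
        cands = M(d)
        return [(x, y) for i, x in enumerate(cands)
                for y in cands[i:] if eq(add(x, y), g)]

    G32 = canonical(add(TWO, TWO))                # 2 + 2 = 32
    assert (TWO, TWO) in partitions_upto(G32, 3)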

6.1 Parity
Definition 25. We say that U is a unit if it has an inverse. If G = H + U for some unit U,
then we say that G and H are associates and write G ≈ H.

By Lemma 17 the only units are 0 and 1. Thus by Proposition 7 every game G has
exactly two associates, G and G + 1, and this induces a natural pairing among games.
We now introduce a convenient way to distinguish between the elements of each pair.

Definition 26. If (the simplest form of) G is an option of (the simplest form of) G + 1,
then we say that G is even and G + 1 is odd.

Proposition 27 (Conway [3]). Every game G is either even or odd, but not both.

Proof. Assume that G is in simplest form. If G is not even, then G must be a reversible
option of G + 1, so that G + 1 = G′ . Therefore G is odd.
Moreover, if G is even, then it is a canonical option of G+1 and hence not reversible.
Therefore G + 1 ≠ G′ for every G′ . So G cannot be both even and odd.
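
With canonical forms in hand, Definition 26 is directly testable (sketch ours):

    def is_even(g):
        # Definition 26: G is even iff (the canonical form of) G is an
        # option of (the canonical form of) G + 1.
        g = canonical(g)
        return g in canonical(add(g, ONE))

    assert is_even(ZERO) and not is_even(ONE)
    assert is_even(TWO) and is_even(TWO_SHARP)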

Proposition 28 (Conway [3]).
(a) If G and H are both even or both odd, then G + H is even.
(b) If G is even and H is odd, then G + H is odd.

Proof. Suppose G and H are both even, and assume (for contradiction) that G + H is
reversible in G + H + 1. Without loss of generality, G′ + H = G + H + 1. By cancellation,
G′ = G + 1, contradicting the assumption that G is even.

The remaining cases follow immediately by substituting X + 1 for X whenever X is odd.

6.2 The group of differences of ℳ


Let 𝒟 be the Abelian group obtained by adjoining formal inverses for all games in ℳ,
that is,

𝒟 = {G − H : G, H ∈ ℳ}

with G1 − H1 = G2 − H2 iff G1 + H2 = G2 + H1 . 𝒟 is an Abelian group, and since ℳ is cancelative, it embeds in 𝒟.
We have seen that 1 + 1 = 0; we will now show that 1 is the only torsion element of
𝒟, so that 𝒟 is torsion-free modulo association.

Theorem 29. If n ⋅ G = n ⋅ H and G and H are both even, then G = H.

Proof. Suppose n ⋅ G = n ⋅ H, written as X in simplest form. If X = 0, then by Lemma 17 we have G = H = 0. Otherwise, there is some option X ′ ∈ X, and we have

X ′ = (n − 1) ⋅ G + G′ = (n − 1) ⋅ H + H ′ (†)

for some G′ ∈ G and H ′ ∈ H. Multiplying by n gives

n ⋅ (n − 1) ⋅ G + n ⋅ G′ = n ⋅ (n − 1) ⋅ H + n ⋅ H ′ .

Now n ⋅ G = n ⋅ H, so n ⋅ (n − 1) ⋅ G = n ⋅ (n − 1) ⋅ H, and so by cancellation

n ⋅ G′ = n ⋅ H ′ .

Now G′ and H ′ have the same parity by (†). So by induction on G and H we may assume
that G′ = H ′ . But now cancellation on (†) gives

(n − 1) ⋅ G = (n − 1) ⋅ H,

and the conclusion follows by induction on n.

Corollary 30. The only torsion element of 𝒟 is 1.

Proof. Let G − H ∈ 𝒟 and suppose n ⋅ (G − H) = 0 for some n, so that n ⋅ G = n ⋅ H. Theorem 29 implies G ≈ H, so that G − H = 0 or 1.

In particular, 𝒟/ ≈ is a torsion-free Abelian group, but the following major question remains open.

Question. What is the isomorphism type of 𝒟?



7 Primes
Definition 31. A part H of G is said to be proper if H ≉ 0 or G.

Definition 32. We say that G is prime if G is not a unit and has no proper parts.

Theorem 33. Every game G can be partitioned into primes.

Proof. By Lemma 20, every game has just finitely many parts. We can therefore prove
the theorem by induction on the number of proper parts of G.
If G itself is prime, then there is nothing to prove. Otherwise, we can write G =
X + Y, where X and Y are proper parts of G. Now every proper part of X is a proper
part of G, but X is not a proper part of X. Therefore X has strictly fewer proper parts
than G. By induction X has a prime partition. By the same argument so does Y, and
we are done.

It is important to note that a partition of G into primes need not be unique. For
example, we can show that

(4 + 2)# = 2 + P = 4 + Q,

where P and Q are distinct primes. (This example is originally due to Conway and Nor-
ton.) The behavior of primes can often be quite subtle. It is possible for G to have sev-
eral prime partitions of different lengths:

(4 + 2# )# = 2 + P1 + P2 = 4 + Q,

where P1 , P2 , and Q are all distinct primes. Furthermore, G + G might have a prime part
that is not a part of G. For example, if G = (4 + 2# )## , then there exists a partition of
G + G into exactly three primes.
Although these examples advise caution, we can nonetheless discern some useful
structure among primes. In the following propositions, we assume G to be given in
simplest form.

Proposition 34 (Conway [3]). If G has 0 or 1 as an option, then G is prime.

Proof. Let G′ ∈ G with G′ = 0 or 1, and suppose that G = X + Y. By Corollary 21 we have G′ = X ′ + Y without loss of generality, but G′ ≈ 0, so by Lemma 17, Y ≈ 0, and hence
X ≈ G. Thus 0 and G are the only even parts of G.

Proposition 35. If G has a prime option, then G has at most two even prime parts.

Proof. Fix a prime option G′ . First, suppose G = X + Y + Z for any three games X, Y,
and Z. By Corollary 21 we have G′ = X ′ + Y + Z without loss of generality. Since G′ is
prime, one of Y or Z must be a unit. This shows that every partition of G involves at
most two primes.

Now suppose G = P1 + P2 = Q1 + Q2 . Without loss of generality,

G′ = P1′ + P2 = Q′1 + Q2 .

Since G′ is prime, P1′ and Q′1 must be units, so P2 and Q2 are equal up to a unit. By cancellation, P1 ≈ Q1 as well.

Corollary 36. If G has at least three prime options, distinct modulo association, then G
is prime.

Proof. Suppose G has a prime option but is not itself prime. By Proposition 35, G has a
unique prime partition G = P + Q. Therefore every G′ = P ′ + Q or P + Q′ . In particular, if
G′ is prime, then G′ ≈ P or Q. So G has at most two prime options up to association.

In a related vein, we have the following proposition.

Proposition 37 (Conway [3]). If every option of G is prime, then so is G, unless G = 0, 2# , 3# , or 32.

Proof. Suppose every option of G is prime, but G is not. By Proposition 35, if G ≠ 0, then we
can write G = P + Q for suitable primes P and Q, and furthermore P and Q are unique
(up to association). Assume that each of G, P, and Q is given in simplest form.
Now G cannot be odd, since then G + 1 would be a prime option of G. So G is even,
and we may therefore assume that P and Q are both even. Now for every option P ′ , we
have either G′ = P ′ + Q or G = P ′ + Q′ . If G′ = P ′ + Q, then since G′ is prime, we must
have G′ ≈ Q, so P ′ is a unit. Suppose instead that G = P ′ + Q′ . We cannot have P ′ ≈ P,
since P is even. Furthermore, we cannot have P ′ ≈ Q: since the partition of G into P + Q
is unique, this would imply Q′ ≈ P, so that P ′′ ≈ P, contradicting the assumption that
P is in simplest form. Therefore either P ′ is a unit and Q′ ≈ G, or else Q′ is a unit and
P ′ ≈ G.
We have therefore shown that every option of P is either 0, 1, G, or G + 1. By sym-
metry the same is true for Q. However, for every option G′ , we have G′ = P ′ + Q or
G′ = P + Q′ . Since G′ is prime, this implies G′ ≈ P or Q. Therefore G cannot be an
option of both P and Q: this would imply that P (or Q) is associated with one of its fol-
lowers. Therefore one of P or Q has only 0 and 1 as options. Without loss of generality,
assume that it is P. Since P is prime, we must have P = 2.
If also Q = 2, then G = 2+2 = 32, and we are done. Otherwise, Q has G as a follower.
But this means that no follower of G can be associated with Q. Thus every option of G
is associated with P = 2, and this completes the proof.

The above propositions suggest that composite games are relatively rare. This can
be made precise by considering the number of composite games born by day n. It suf-
fices to consider only even composites, since the number of odd composites born by
day n is precisely equal to the number of even composites born by day n − 1.

22 = 2 + 2 2# 2 = 2 + 22 2#2 (2# 2)(2# 2)1


2# = 2 + 2# 2#1 22 2# 32 = 2 + 22 32 2#2 (2# 32)(2# 32)1
3# = 2 + 3# 3#1 32 2## = 2 + 2#−2 + (2## 2##1 2#−2 )0

Figure 20.5: The six composite even games born by day 4.

G        Primes        G        Primes        G              Primes
2##      3             22#      3             2#1 2#         3
3##      3             2#2      3             2## 2#         3
2#1#     3             2###     4             2## 2#1 2#     3

Figure 20.6: The nine highly composite even games born by day 5, listed with number of prime parts.

There are six even composites born by day 4. These and their unique partitions are
summarized in Figure 20.5. A computer search revealed exactly 490 even composites
born by day 5. Of these, 481 have exactly two even prime parts. Figure 20.6 lists the
nine examples with more than two parts.

7.1 Unique partitions


Definition 38. We say that G has the unique partition property (UPP) if G has exactly
one prime partition (up to association).

We have already noted that (4 + 2)# does not have the UPP. In this section, we will
prove that every game born by day 6 has the UPP. Since (4 + 2)# is born on day 7, it is
therefore a minimal example.

Definition 39. We say G is a biprime if G has exactly two even prime parts.

Proposition 40. Suppose that G has a biprime option, say G′ = R+S with R and S prime.
Then either
(a) G is itself a prime or a biprime; or
(b) There is a prime P such that G = P + R + S, and this is the unique prime partition
of G; or
(c) There are primes P and Q such that G = P + R = Q + S, and these are the only two
prime partitions of G.

Proof. If G is prime, then we are in case (a), so assume that it is not.

Case 1: First, suppose G has a prime partition

G = P1 + ⋅ ⋅ ⋅ + Pk

with k ≥ 3. Without loss of generality,

G′ = P1′ + P2 + ⋅ ⋅ ⋅ + Pk .

Since G′ is a biprime, it must be the case that k = 3 and P1′ is a unit, and without loss
of generality, P2 ≈ R and P3 ≈ S. Let us prove that this is the unique prime partition
of G. Let

G = Q1 + ⋅ ⋅ ⋅ + Ql

be any prime partition. By an identical argument we have l ≤ 3 and

G′ = Q′1 + Q2 + ⋅ ⋅ ⋅ + Ql .

Certainly, l ≠ 1, since G is composite. Moreover, since Q2 is a prime part of G′ , we have Q2 ≈ R without loss of generality. Therefore l ≠ 2: by cancellation, l = 2 would imply Q1 ≈ P1 + S, contradicting the fact that Q1 is prime. So l = 3, and hence Q2 ≈ R,
Q3 ≈ S, and by cancellation Q1 ≈ P1 . This shows that P1 +P2 +P3 is unique, establishing
case (b).

Case 2: Next, suppose that every prime partition of G has exactly two primes. Con-
sider any such partition

G = P1 + P2 .

Without loss of generality,

G′ = P1′ + P2 .

But by the assumptions on G′ this implies P2 = R or S, and by cancellation, P1 is uniquely determined by P2 . This shows that there are at most two such partitions, so
either G is a biprime, or else it has exactly two prime partitions into exactly two parts,
as in (c).

Corollary 41. Suppose G is a game born by day 6 without the UPP. Assume that at least
one option of G is a biprime. Then, in fact,

G = 2 + P = Q1 + Q2

for distinct primes P, Q1 , and Q2 , none of which equal 2.

Proof. This is just Proposition 40, together with the (computationally verifiable) fact
that 2 is a part of every composite game born by day 5.

Lemma 42. Suppose G is a game born by day 6 without the UPP. Suppose G has a
biprime option 2 + R. If some other option G′ does not have R as a part, then G = R + S
for some part S of G′ .

Proof. By Corollary 41 we have G = 2 + P = Q1 + Q2 , where Q1 , Q2 ≠ 2. Without loss of generality, 2 + R = Q1 + Q′2 . Since Q1 ≠ 2, we must have Q1 = R.
Now we cannot have G′ = Q1 + Q′2 , since Q1 = R is not a part of G′ . So G′ = Q′1 + Q2 ,
whence Q2 is a part of G′ .

Theorem 43. Every game born by day 6 has the UPP.

Proof. Let G be a game born by day 6. If G has any prime options, then G is a biprime,
so it has the UPP. Likewise, if G is odd, then G + 1 is an even game born by day 5, with
the same parts as G. We know that every game born by day 5 has the UPP, so such G
must also have the UPP. Thus we need only consider even games whose options are
all composite.
Now let

𝒞 = {G ∈ ℳ5 : G has at least 3 prime parts}.

We noted previously that |𝒞 | = 10. Thus there are 2^10 subsets of 𝒞 , and a computer
search can rapidly verify that all of them have the UPP.
This leaves only those games with at least one biprime option. We can now apply
the following trick. Let

𝒜 = {H : H is an even prime part of some composite game born by day 5}.

Let 𝒜 + 𝒜 be the set of all pairwise sums of elements of 𝒜. If G has at least one biprime
option, then by Lemma 42 either:
(i) G ∈ 𝒜 + 𝒜; or
(ii) All options of G share a common part R ≠ 2.

It therefore suffices to exhaust all possibilities for (i) and (ii). For (i), we have |𝒜| < 500, so |𝒜 + 𝒜| < 250000. It is therefore easy to compute the set 𝒜 + 𝒜, and a simple
computation shows that for most G ∈ 𝒜 + 𝒜, we have b(G) > 6. We can then directly
show that the remaining few have the UPP.
To complete the proof, we describe how to exhaust case (ii). For each R ∈ 𝒜, let

𝒞R = {G ∈ 𝒞 : R is a part of G}.

Now ΣR |𝒞R | is small, since the elements of 𝒞 collectively have a small number of parts.
But to address case (ii), we need only consider those games whose options are subsets
of {2 + R} ∪ 𝒞R for some R ≠ 2: these are exactly the games whose options share the common factor R. We can therefore iterate over all R and all subsets of {2 + R} ∪ 𝒞R ,
checking that each possibility has the UPP.
All of the necessary computations to complete the proof have been implemented
and verified in cgsuite.

Appendix A. Recurrence relations for |ℳn |


This Appendix gives a set of recurrence relations that can be used to compute the exact
value of |ℳn |, expressed in terms of chained powers of 2. (As discussed in Section 4,
the calculation can feasibly be carried out only for n ≤ 7, but the relations continue to
hold for larger n.)
For n ≥ 0 and G, K ∈ ℳ, define:

ℛn^G = {H ⊂ ℳn−1 : H simplifies to G},
𝒮n^K = {H ∈ ℳn : K ∈ H},
ℛn^{G,K} = {H ⊂ ℳn−1 : H simplifies to G and K ∈ H},
𝒩n = {H ∈ ℳn : H is an N -position},
𝒩n^0 = {H ∈ ℳn : H is an N -position and 0 ∈ H}.

Here ℛn^{G,K} is defined only when G ∈ K.

Theorem 44. For all n ≥ 2, we have the following recurrences:

|ℳn | = 2^|ℳn−1| − ∑_{G∈ℳn−2} |ℛn^G|,

|ℛn^G| = { 2^|𝒮n−1^G| − 1               if G ≇ 0,
         { 2^|𝒮n−1^G| − 2^|𝒩n−1^0|      if G ≅ 0,

|𝒮n^K| = 2^(|ℳn−1|−1) − ∑_{G∈K} |ℛn^{G,K}| − ∑_{G∈𝒮n−2^K} |ℛn^G|,

|ℛn^{G,K}| = { 2^(|𝒮n−1^G|−1)                         if G ≇ 0 or K is a P -position,
             { 2^(|𝒮n−1^G|−1) − 2^(|𝒩n−1^0|−1)        if G ≅ 0 and K is an N -position

(|ℛn^{G,K}| is defined only when G ∈ K),

|𝒩n | = 2^|ℳn−1| − 2^|𝒩n−1| + 1 − ∑_{G∈𝒩n−2} |ℛn^G|,

|𝒩n^0| = 2^(|ℳn−1|−1) − 2^(|𝒩n−1|−1) − ∑_{G∈𝒩n−2^0} |ℛn^G|.

Proof. We discuss each equation in turn.



|ℳn |. There are 2^|ℳn−1| subsets of ℳn−1 . For each subset H ⊂ ℳn−1 , either H is canonical, or H simplifies to G for a unique G ∈ ℳn−2 .
|ℛn^G|. In order for H ⊂ ℳn−1 to simplify to G, it must have the exact form

H ≅ {G1 , . . . , Gk , H1 , . . . , Hl },

where G ≅ {G1 , . . . , Gk } and G ∈ each Hj . Now G1 , . . . , Gk are fixed, so games with this form correspond one-to-one with subsets {H1 , . . . , Hl } of 𝒮n−1^G . There are in total 2^|𝒮n−1^G| such subsets, but we must subtract 1 to exclude the empty set, which corresponds to G itself.
If G ≇ 0, then we are done. If G ≅ 0, then we must apply the proviso as well: in order for H to be reducible, it must also satisfy o(H) = N , in addition to having the prescribed form. So we must subtract off a count of H with o(H) = P .
Now o(H) = P if and only if every Hj is an N -position and there is at least one Hj . There are |𝒩n−1^0| such N -positions (since we also know that 0 ∈ each Hj ), yielding a total of 2^|𝒩n−1^0| − 1 possibilities for H. Subtracting this from the overall count of 2^|𝒮n−1^G| − 1 yields the total stated in the theorem.
|𝒮n^K|. There are 2^(|ℳn−1|−1) subsets H ⊂ ℳn−1 with K ∈ H. There are two ways that such an H might be reducible: either H simplifies to some G ∈ K (so that K reverses through G), or H simplifies to some other G ∈ ℳn−2 with K ∈ G (so that K is already present in the simplified form G). These are mutually exclusive, since in the first case, K ∉ G, but in the second, K ∈ G.
|ℛn^{G,K}|. There are 2^(|ℳn−1|−1) subsets H ⊂ ℳn−1 with K ∈ H. To be reducible, H must have the exact form

H ≅ {G1 , . . . , Gk , H1 , . . . , Hj , K}

with G being a member of each Hj . (The assumption G ∈ K implies that K is not among the options G1 , . . . , Gk of G.) The rest of the argument proceeds just as for |ℛn^G|, with two modifications: (i) j = 0 is no longer a special case; since K ∈ H, it will never be true that H ≅ G. (ii) If G ≅ 0 but o(K) = P , then the proviso does not apply, since the presence of K ensures o(H) = N .
|𝒩n |. There are 2^|ℳn−1| subsets H ⊂ ℳn−1 . There are just two ways that H might
fail to be a canonical N -position. Either H is a nonempty set of N -positions,
in which case it is a P -position, or H simplifies to some other N -position
G ∈ 𝒩n−2 . These are mutually exclusive, since no P -position can simplify to
an N -position.
|𝒩n^0|. This one is similar to the preceding argument, with one small twist. There are 2^(|ℳn−1|−1) subsets H ⊂ ℳn−1 with 0 ∈ H. There are two ways that H might fail to be a canonical N -position. Either H is a set of N -positions with 0 ∈ H (which must always be nonempty), or H simplifies to some other N -position G ∈ 𝒩n−2 . If H simplifies to some other G, then it must be the case that 0 ∈ G as well (since 0 can never be reversible). So these two conditions are both mutually exclusive and entirely contained within the 2^(|ℳn−1|−1) subsets originally counted.
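
Under the notation above, the recurrences mechanize directly. The sketch below (ours, building on the canonical, outcome, and M helpers from the Section 4 sketches) counts everything through day 4 by brute force and then applies Theorem 44; it reproduces |ℳ5 | = 4171780 and yields |ℳ6 | as an exact four-million-bit integer, in agreement with the expression in Figure 20.1. Pushing to |ℳ7 | would require an enumeration of ℳ5 , as discussed in Section 4.

    Mn = {m: M(m) for m in range(5)}    # canonical games born by day <= 4

    def Mcount(n):                      # |M_n|
        if n <= 4:
            return len(Mn[n])
        return 2**Mcount(n - 1) - sum(R(n, G) for G in Mn[n - 2])

    def Scount(n, K):                   # |S_n^K|
        if n <= 4:
            return sum(1 for H in Mn[n] if K in H)
        return (2**(Mcount(n - 1) - 1)
                - sum(R2(n, G, K) for G in K)
                - sum(R(n, G) for G in Mn[n - 2] if K in G))

    def Ncount(n):                      # |N_n|
        if n <= 4:
            return sum(1 for H in Mn[n] if outcome(H) == 'N')
        return (2**Mcount(n - 1) - 2**Ncount(n - 1) + 1
                - sum(R(n, G) for G in Mn[n - 2] if outcome(G) == 'N'))

    def N0count(n):                     # |N_n^0|
        if n <= 4:
            return sum(1 for H in Mn[n] if outcome(H) == 'N' and ZERO in H)
        return (2**(Mcount(n - 1) - 1) - 2**(Ncount(n - 1) - 1)
                - sum(R(n, G) for G in Mn[n - 2]
                      if outcome(G) == 'N' and ZERO in G))

    def R(n, G):                        # |R_n^G|
        return 2**Scount(n - 1, G) - (1 if G else 2**N0count(n - 1))

    def R2(n, G, K):                    # |R_n^{G,K}|, defined only for G in K
        base = 2**(Scount(n - 1, G) - 1)
        return base if (G or outcome(K) == 'P') else base - 2**(N0count(n - 1) - 1)

    assert Mcount(5) == 4171780         # matches Figure 20.2
    m6 = Mcount(6)                      # agrees with the check after Figure 20.3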

Appendix B. An algorithm for computing parts


Algorithm 1, due jointly to Dan Hoey and the author, shows how to efficiently compute
the parts of an arbitrary game G. The algorithm is structured so that whenever a part
X is detected, then so is its counterpart G − X. A record of these part–counterpart re-
lationships can be kept throughout the execution of the algorithm, so that differences
of the form G − X can be efficiently resolved.

Theorem 45. Algorithm 1 correctly computes Parts(G).

Proof. We can assume that Parts(G′ ) is correctly computed for all G′ ∈ G. Now it is
easy to see that every game that is put into 𝒳 is indeed a part of G: if a partition X + Y
is added in Step 8, then X + Y is novel; if it is added in Step 19, then the condition of
Step 18 directly witnesses the identity G = X + Y.
To complete the proof, we must show that every part of G is eventually placed
in 𝒳 . Suppose not, and let X + Y be a partition of G that the algorithm fails to find.
Assume that X + Y is minimal in the sense of b(X) + b(Y). In particular, at some stage
of the algorithm, we have X ′ , Y ′ ∈ 𝒳 for every partition of the form G = X ′ + Y ′ .
Suppose X + Y is derived. By Lemma 24 there is some G† such that G† = X + Y ′ .
Therefore X ∈ Parts(G† ), so X will be encountered in the main loop of Algorithm 1.
Since X is derived, either some G − X ′ or some G′ − X must fail to exist. If G − X ′ does
not exist, then since G = X + Y, we must have G′ = X ′ + Y for some G′ . Therefore
Y = G′ − X ′ . If G′ − X does not exist, then G′ = X ′ + Y for some X ′ , so again Y = G′ − X ′ .
In either case, Y ∈ 𝒴 (as defined in Algorithm 1). Thus it suffices to verify that G, X,
and Y jointly satisfy the condition of Step 18. By the inductive hypothesis we have
X ′ , Y ′ ∈ 𝒳 whenever G = X ′ + Y ′ , so the condition states precisely that G = X + Y,
which is true. Therefore X and Y are put into 𝒳 in Step 19, a contradiction.
Finally, suppose X + Y is novel, and assume without loss of generality that X is
novel. Then both G−X ′ and G′ −X exist, and it is easily checked that Y = {G−X ′ , G′ −X}.
Since the algorithm fails to detect X + Y at Step 8, it must be the case that X ′ ∉ 𝒳 for
some X ′ . Now b(X ′ ) < b(X), so by the inductive hypothesis this implies b(G − X ′ ) >
b(Y). Therefore G − X ′ must be a reversible option of Y = {G − X ′ , G′ − X}, so either
G − X ′′ = Y, or G′ − X ′ = Y. The former is obviously false (by cancellation), so we must
have Y = G′ − X ′ , but then the partition X + Y will be detected in Step 19 by the same
argument used in the previous paragraph.

1: 𝒳 ← ∅
2: for all G′ ∈ G do
3:     Recursively compute Parts(G′ )
4: end for
5: for all X such that X ∈ Parts(G′ ) for some G′ ∈ G do
6:     if X is in every Parts(G′ ) and every X ′ ∈ 𝒳 then
7:         Y ← {G − X ′ , G′ − X}
8:         𝒳 ← 𝒳 ∪ {X, Y}                         ⊳ X + Y is novel
9:     else
10:        if X ∉ Parts(G′ ) for some G′ then
11:            Fix any such G′
12:            𝒴 ← {G′ − X ′ : X ′ ∈ X with X ′ ∈ Parts(G′ )}
13:        else
14:            Fix any X ′ ∉ 𝒳
15:            𝒴 ← {G′ − X ′ : G′ ∈ G with X ′ ∈ Parts(G′ )}
16:        end if
17:        for all Y ∈ 𝒴 do
18:            if ∀G′ ∈ G, either G′ − Y = X ′ or G′ − X = Y ′ ;
                   and ∀X ′ ∈ X, either G′ − Y = X ′ or (X ′ , Y ′ ) ∈ 𝒳 ;
                   and ∀Y ′ ∈ Y, either G′ − X = Y ′ or (X ′ , Y ′ ) ∈ 𝒳 then
19:                𝒳 ← 𝒳 ∪ {X, Y}                 ⊳ X + Y is derived
20:            end if
21:        end for
22:    end if
23: end for
24: if 𝒳 has changed then
25:     Return to Step 5
26: end if
27: Parts(G) ← 𝒳

Algorithm 1: Computing the parts of G.

Bibliography
[1] D. T. Allemang, Machine computation with finite games, Master’s thesis, Trinity College,
Cambridge, 1984, http://miseregames.org/allemang/.
[2] C. L. Bouton, Nim, a game with a complete mathematical theory, Ann. of Math. 3 (1901), 35–39.
[3] J. H. Conway, On Numbers and Games, second edition, A K Peters, Ltd./CRC Press, Natick, MA,
2001.
[4] J. H. Conway, Personal communication, 2006.
[5] P. M. Grundy, Mathematics and games, Eureka 2 (1939), 6–8.
[6] P. M. Grundy and C. A. B. Smith, Disjunctive games with the last player losing, Proc. Cambridge
Philos. Soc. 52 (1956), 527–533.

[7] R. K. Guy and C. A. B. Smith, The G-values of various games, Proc. Cambridge Philos. Soc. 52
(1956), 514–526.
[8] T. E. Plambeck, Taming the wild in impartial combinatorial games, Integers 5 (2005), #G05.
[9] T. E. Plambeck and A. N. Siegel, Misère quotients for impartial games, J. Combin. Theory Ser. A
115 (2008), 593–622.
[10] A. N. Siegel, Combinatorial Game Theory, volume 146 in Graduate Studies in Mathematics,
American Mathematical Society, Providence, RI, 2013.
[11] C. A. B. Smith, Compound two-person deterministic games, unpublished manuscript.
[12] C. Thompson, Count of day 6 misere-inequivalent impartial games, posted to usenet
rec.games.abstract on February 19, 1999.