
4701 Project Report - Chinese Chess

Jiahao Zhang (jz522), Yiting Wang (yw428)

PROJECT OVERVIEW
Chinese Chess
General Description

Figure 1: Chinese Chess Board Starting Positions

Chinese Chess (Xiangqi) is a chess-like strategy board game for two players. It is played
on a board of 10 horizontal and 9 vertical lines, giving 90 intersections. A river separates the board into two opposing sides, the red
side and the black side. There are also diagonal lines that mark the paths certain pieces
must follow and the boundaries certain pieces cannot leave. Pieces can only be placed on line
intersections, but they do not necessarily have to follow the lines to move.
The objective of the game is to capture the other player's "General (將/帥)". As in chess,
different pieces have different movement restrictions. For example, the Cannon (砲/炮) can move
horizontally or vertically, but must jump over exactly one piece to capture. The middle area of the board,
called the river, also restricts the movements of some pieces to one side. Figure 1 shows the board
with all pieces in their starting positions. All pieces move within their restrictions, and each
move may result in the capture of an opposing piece if the moving and capturing restrictions
are obeyed.

Pieces
This is a detailed description of each individual piece and its movement restrictions. The name of
each piece is specified as:
Translation(ChineseCharacterForRed/ChineseCharacterForBlack/Romanization)
Romanization is the rough phonetic mapping from Chinese characters to the Roman script.
Even though the Chinese characters for the same piece differ between the two sides (and
are sometimes pronounced differently), they are essentially the same piece, and by convention
only one romanization is used. Our Chinese Chess AI uses the romanizations of the characters
for convenience.

General(將/帥/Jiang):
This is the most important piece in Chinese Chess.
Players win by capturing the General from the other side. The General can move forward,
backward, left, or right with only 1 step at each turn. It can only move within the 3 by 3 square
as specified by the diagonal lines from its side. An important configuration of the Generals is
when the Generals from both sides are on the same line without any other pieces in between
them. The side that initiates this configuration loses. In other words, the Generals cannot meet
each other.

Guards(士/仕/Shi):
The Guards can only move along the diagonal lines form its side with 1 step at each turn. It can
only move within the 3 by 3 square as specified by those diagonal lines.

Elephants(象/相/Xiang):
The Elephants can only move along diagonals of 2x2 grids on the board. It cannot move across
the river. As a result, each Elephant effectively has 7 possible positions on the board.
Additionally, Elephants cannot move or capture across a 2x2 grid with a piece at the center of
that grid. This restriction is called “拐脚/蹩脚”, or “blocking feet.”

Chariots(車/俥/Che):
This is the most powerful piece in Chinese Chess.
The Chariots can move forward, backward, left, or right with any steps as long as there are no
other pieces in its way. It captures in a similar fashion.
Horses(馬/傌/Ma):
The Horses move along the diagonals of 1x2 or 2x1 grids on the board. They are also restricted
by "拐脚/蹩脚", or "blocking feet", as shown in Figure 2. In words, if the point orthogonally
adjacent to a Horse in the direction of travel (not a diagonally adjacent point) is occupied, the
Horse cannot move to the positions along the diagonals of the 1x1 grids containing that blocking
piece.

Figure 2: Horses blocking feet
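The blocking-feet rule lends itself to a short code sketch. This is a minimal illustration, assuming a board represented as a dict mapping (row, col) to pieces; the names `board` and `horse_moves` are hypothetical, not the report's actual implementation.

```python
# A minimal sketch of the Horse's "blocking feet" rule, assuming a board
# represented as a dict mapping (row, col) -> piece (absent key = empty).
# Names are illustrative, not taken from the report's code.

def horse_moves(board, row, col):
    """Return the positions a Horse at (row, col) may move to."""
    moves = []
    # Each entry pairs a blocking offset (the Horse's "leg") with a landing
    # offset. If the leg point is occupied, the landing is blocked.
    legs = [
        ((-1, 0), (-2, -1)), ((-1, 0), (-2, 1)),   # forward
        (( 1, 0), ( 2, -1)), (( 1, 0), ( 2, 1)),   # backward
        (( 0, -1), (-1, -2)), (( 0, -1), (1, -2)),  # left
        (( 0, 1), (-1, 2)), (( 0, 1), (1, 2)),      # right
    ]
    for (br, bc), (dr, dc) in legs:
        if (row + br, col + bc) in board:
            continue  # leg is blocked ("blocking feet")
        r, c = row + dr, col + dc
        if 0 <= r < 10 and 0 <= c < 9:  # 10x9 intersections
            moves.append((r, c))
    return moves
```

On an empty board a centrally placed Horse has all 8 landings available; occupying one orthogonally adjacent point removes the two landings behind that leg.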

Cannons(砲/炮/Pao):
The Cannons can move forward, backward, left, or right by any number of steps, as long as no
other pieces are in the way. However, much like real cannons, they must capture by jumping
over exactly one other piece, which may belong to either side.

Soldiers(卒/兵/Bing):
Before a Soldier crosses the river, it can only move forward, one step per turn; it cannot move
backward, left, or right.
After crossing the river, it can move left, right, or forward one step per turn, but still cannot
move backward.
Chess Implementation

Figure 3: Command line Chinese Chess


We successfully implemented a Chinese Chess game that has 5 game modes, which can be
selected upon starting the game.

Person Mode. The standard Chinese Chess game where two people can play.
AI Mode. A human player can play against the AI.
Double AI Mode. Watch two AI players play against each other.
Genetic Piece Strength Mode. Runs the genetic algorithm over the strengths of the pieces.
Genetic Position Value Mode. Runs the genetic algorithm over the position values of the pieces.

This Chinese Chess game is entirely command-line based, as we did not write a UI for it.
Instead, we use letters and numbers to represent the chess pieces, printed alongside an id for
each piece so that the player can control the pieces by id. Console input takes the format
"<id> <direction> <steps>", which specifies the piece to move, the direction to move in, and the
number of steps to move by. Here are the allowed commands for each piece (with move and
capture restrictions implemented):

*F: Forward, B: Backward, L: Left, R: Right. Their combinations are also used for diagonal
movements. For square diagonals (e.g. Guards), swapping letters results in the same
movement. For non-square diagonals (e.g. Horses), swapping letters results in different
movements.

General(將/帥/Jiang/J/j):
<id> F <optional_step>
<id> B <optional_step>
<id> L <optional_step>
<id> R <optional_step>
Guards(士/仕/Shi/S/s):
<id> FL <optional_step> or <id> LF <optional_step>
<id> BL <optional_step> or <id> LB <optional_step>
<id> FR <optional_step> or <id> RF <optional_step>
<id> BR <optional_step> or <id> RB <optional_step>

Elephants(象/相/Xiang):
<id> FL <optional_step> or <id> LF <optional_step>
<id> BL <optional_step> or <id> LB <optional_step>
<id> FR <optional_step> or <id> RF <optional_step>
<id> BR <optional_step> or <id> RB <optional_step>

Chariots(車/俥/Che):
<id> F <step>
<id> B <step>
<id> L <step>
<id> R <step>

Horses(馬/傌/Ma):
<id> FL <optional_step>
<id> LF <optional_step>
<id> BL <optional_step>
<id> LB <optional_step>
<id> FR <optional_step>
<id> RF <optional_step>
<id> BR <optional_step>
<id> RB <optional_step>

Cannons(砲/炮/Pao):
<id> F <step>
<id> B <step>
<id> L <step>
<id> R <step>

Soldiers(卒/兵/Bing):
Before crossing river:
<id> F <optional_step>
After crossing river:
<id> F <optional_step>
<id> L <optional_step>
<id> R <optional_step>
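Parsing the "<id> <direction> <steps>" commands above can be sketched as follows; the function name and error handling are illustrative assumptions, not the report's actual code.

```python
# A hedged sketch of parsing the "<id> <direction> <steps>" console command.
# The function name and error messages are illustrative.

VALID_DIRECTIONS = {"F", "B", "L", "R",
                    "FL", "LF", "BL", "LB", "FR", "RF", "BR", "RB"}

def parse_command(line):
    """Parse '<id> <direction> [<steps>]'; steps defaults to 1 when omitted."""
    parts = line.split()
    if len(parts) not in (2, 3):
        raise ValueError("expected '<id> <direction> [<steps>]'")
    piece_id, direction = parts[0], parts[1].upper()
    if direction not in VALID_DIRECTIONS:
        raise ValueError("unknown direction: " + direction)
    steps = int(parts[2]) if len(parts) == 3 else 1
    return piece_id, direction, steps
```

For example, "c1 F 3" moves piece c1 forward three steps, and "s2 FL" moves piece s2 one step forward-left.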
AI IMPLEMENTATION

Minimax Search with Alpha-beta Pruning


We implemented a depth-limited minimax search algorithm with alpha-beta pruning.

Depth of the Search Tree


Since there can be as many as 184 possible moves from a single position in Chinese Chess, the
game tree grows very large very quickly. Through play testing during development, we settled on
a depth of 3, because it is sophisticated enough to make a human player feel pressured while
not taking too long to determine a move.
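The depth-limited minimax search with alpha-beta pruning can be sketched as below. The `gen_moves`, `apply_move`, and `evaluate` callables are hypothetical stand-ins for the game's move generator, move application, and heuristic function; the report's actual interfaces may differ.

```python
# A minimal depth-limited minimax with alpha-beta pruning, in the shape the
# report describes. gen_moves/apply_move/evaluate are hypothetical stand-ins.

def minimax(state, depth, alpha, beta, maximizing,
            gen_moves, apply_move, evaluate):
    """Return (value, best_move) for `state`, searching `depth` plies."""
    legal = gen_moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    if maximizing:
        value = float("-inf")
        for move in legal:
            score, _ = minimax(apply_move(state, move), depth - 1, alpha,
                               beta, False, gen_moves, apply_move, evaluate)
            if score > value:
                value, best_move = score, move
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer will avoid this branch
    else:
        value = float("inf")
        for move in legal:
            score, _ = minimax(apply_move(state, move), depth - 1, alpha,
                               beta, True, gen_moves, apply_move, evaluate)
            if score < value:
                value, best_move = score, move
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: the maximizer will avoid this branch
    return value, best_move
```

The pruning means whole subtrees are skipped once a branch is provably worse than an already-examined alternative, which is what makes depth 3 tractable.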

Heuristic Function for Game States


The heuristic function we implemented is based on both the strength and the position of each
piece. In Chinese Chess, not only do the pieces have different strengths, but the positions of
the pieces are also a very important metric: some positions limit the power of certain pieces,
while others enhance it. For example, a Chariot is more powerful in open space than in a
corner. The heuristic function computes a weighted sum over all the pieces on the board, where
the weight of each piece is assigned based on its strength and position.

The value of a state s for the black side is defined as:

value(s) = Σ_{p ∈ black} strength(p) × position(p) − Σ_{q ∈ red} strength(q) × position(q)
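The weighted sum above can be sketched as follows, assuming pieces are given as (side, strength, position_value) tuples; the report does not specify its actual data structures.

```python
# Sketch of the weighted-sum heuristic from black's perspective. Pieces are
# assumed to be (side, strength, position_value) tuples -- an illustrative
# representation, not the report's actual one.

def evaluate(pieces):
    """Black's score: black terms add, red terms subtract."""
    score = 0
    for side, strength, position_value in pieces:
        term = strength * position_value
        score += term if side == "black" else -term
    return score
```

A black Cannon (strength 600) on a position worth 2 contributes +1200; the same piece on the red side contributes −1200.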

Piece Strength
The strength of a piece reflects how strong the piece is in general. We initially defined the
strengths by the following table, based on our own experience with Chinese Chess.

General Guard Elephant Horse Chariot Cannon Pawn

20000 300 400 600 600 600 200

Position Values
The position values of the piece is defined based on how powerful the piece is at certain
positions. For the initial set of position values, we found this paper, Computer Chinese Chess[1]
by Chen et al. really helpful since they define a set of position values for some of the pieces.
For example, this is the position values they defined for the Bing piece:
[[ 0, 3, 6, 9, 12, 9, 6, 3, 0],
[18, 36, 56, 80,120, 80, 56, 36, 18],
[14, 26, 42, 60, 80, 60, 42, 26, 14],
[10, 20, 30, 34, 40, 34, 30, 20, 10],
[ 6, 12, 18, 18, 20, 18, 18, 12, 6],
[ 2, 0, 8, 0, 8, 0, 8, 0, 2],
[ 0, 0, -2, 0, 4, 0, -2, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0]]
This matrix of position values for Pawns shows how powerful a Pawn can become after it crosses
the river: it can only move forward before crossing, but can also move left and right afterward.
As the Chinese proverb goes, "A Pawn that crosses the river is worth half a
Chariot."

There are also pieces for which Chen et al. do not provide position values, such as the General,
Elephant, and Guard. For these pieces, we defined the position values ourselves based on our
experience. For example, we defined the position values for the Guard as follows, giving
nonzero values only to the positions a Guard can actually occupy:
[[ 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 1, 0, 1, 0, 0, 0],
[ 0, 0, 0, 0, 2, 0, 0, 0, 0],
[ 0, 0, 0, 3, 0, 3, 0, 0, 0]]
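Tables like these are written from one side's point of view. One way to reuse a single 10x9 table for both colours (an assumption of this sketch, not something stated in the report) is to mirror the row index for the opposing side:

```python
# Sketch: look up a position value from a 10x9 table, mirroring the row for
# the red side so one table serves both colours. This mirroring convention
# is an assumption, not the report's documented behaviour.

def position_value(table, side, row, col):
    """Return the position value at (row, col) for the given side."""
    if side == "red":
        row = 9 - row  # mirror across the river
    return table[row][col]
```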
Genetic Algorithm
Once we had the initial piece strengths and position values defined above, we tried to further
improve the AI by implementing a genetic algorithm. A standard genetic algorithm needs two
things: a genetic representation of the states, and a fitness function to evaluate them. We
created two versions of the genetic algorithm: one for the strengths of the pieces, and one for
the position values of a piece.

Genetic Representation of the States


In the case of improving the piece strengths, the genetic representation is defined as a
7-tuple whose values correspond to the strengths of (General, Guard, Elephant, Horse,
Chariot, Cannon, Pawn). During minimax search, the strength of each piece is taken from this
tuple when the heuristic function evaluates a state.

In the case of improving the position values, the genetic representation is defined as a 90-tuple
covering the 90 possible positions on the board. In each iteration of the algorithm, the 90-tuple
is reshaped into a 10x9 two-dimensional array corresponding to the positions on the board, so
that the minimax search can take the new position values into account in its heuristic function.
However, this can only improve the position values of one piece at a time, so improving all
pieces requires running the genetic algorithm once per piece.

Fitness Function
Since we are trying to improve the piece weights so that the minimax algorithm performs better,
we define one genetic state to be a better fit than another if an AI using the first state can beat
an AI using the second. This is a very rough estimate of fitness, but the idea is to let the two
AIs play against each other and place the better genetic state into our "genetic pool".

Implementation
The genetic algorithm starts off with two genetic states that we manually define, one for each
competing AI, and an initially empty "genetic pool" of good states. In each iteration, the two AIs
play against each other, and the genetic state of the winning AI is placed into the pool. At the
end of the iteration, we update the genetic state of the losing AI as follows:
● If the genetic pool contains at least two genetic states, randomly choose two from the
pool and perform a crossover: create a new tuple in which each element is equally likely
to come from the corresponding element of either parent tuple.
● Perform a mutation on the genetic state: each element of the tuple has probability 0.1 of
being changed to a value greater or smaller than the original, within a range of 600 and
with a step size of 100.
The iterations are repeated a number of times (e.g. 10), and the final state is the one used in
the AI-versus-human game.
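The crossover and mutation steps above can be sketched as follows; the helper names are illustrative, and the example tuples stand in for the report's 7-tuples of piece strengths (or 90-tuples of position values).

```python
# Sketch of the crossover and mutation operators described above.
# Function names and tuple contents are illustrative.
import random

def crossover(parent_a, parent_b):
    """Each child element comes from either parent with equal probability."""
    return tuple(random.choice(pair) for pair in zip(parent_a, parent_b))

def mutate(state, rate=0.1, span=600, step=100):
    """Each element is changed, with probability `rate`, by a nonzero
    multiple of `step` within +/- `span` of its original value."""
    offsets = [o for o in range(-span, span + step, step) if o != 0]
    return tuple(v + random.choice(offsets) if random.random() < rate else v
                 for v in state)
```

For example, crossing (20000, 300, ...) with (18700, 400, ...) yields a tuple whose first element is 20000 or 18700 with equal probability, and mutation then perturbs individual elements by ±100 to ±600.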

Results
We only ran the genetic algorithm a few times, because it was implemented towards the end of
the semester and takes a considerable amount of time to run to completion.

Strength Improvements. While our initial strength values were (20000, 300, 400, 600, 600, 600,
200), the genetic algorithm produced the new set (18700, 400, 500, 600, 800, 600, 300), which
is not too different from our original estimate of the pieces' strengths.

Position Value Improvements. The genetic algorithm also helped us refine the position values.
When running it on the position values of the Cannon, for example, our original values were:
[[ 6, 4, 0,-10,-12,-10, 0, 4, 6],
[ 2, 2, 0, -4,-14, -4, 0, 2, 2],
[ 2, 2, 0,-10, -8,-10, 0, 2, 2],
[ 0, 0, -2, 4, 10, 4, -2, 0, 0],
[ 0, 0, 0, 2, 8, 2, 0, 0, 0],
[-2, 0, 4, 2, 6, 2, 4, 0, -2],
[ 0, 0, 0, 2, 4, 2, 0, 0, 0],
[ 4, 0, 8, 6, 10, 6, 8, 0, 4],
[ 0, 2, 4, 6, 6, 6, 4, 2, 0],
[ 0, 0, 2, 6, 6, 6, 2, 0, 0]]
Due to the time one iteration takes at minimax depth 3, we were only able to run a few
iterations. After running them, we noticed that offspring with more positive position values
almost always beat AIs with negative position values. We therefore refined the position values
so that none are negative, implemented by adding 14 to every value above.
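The shift described above is a small transformation; the offset of 14 comes from the report, while the function name is illustrative.

```python
# Sketch of the normalization step: add a constant offset so no position
# value is negative. The offset of 14 matches the report; the name is ours.

def shift_non_negative(table, offset=14):
    """Return a copy of a position-value table shifted by `offset`."""
    return [[v + offset for v in row] for row in table]
```

Applied to the Cannon table above, whose minimum entry is -14, this makes the smallest value exactly 0 while preserving all relative differences.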

Overall, without the constraints of computational power and time, we would run the genetic
algorithm over the strengths and position values of all pieces for many iterations to find the
fittest values. Given the constraints, we were only able to tweak the strengths and position
values of some pieces using the genetic algorithm's results. In particular, because computation
grows exponentially with minimax depth, the genetic algorithm's effectiveness is compromised;
for now it can only serve as a qualitative optimization method.
EVALUATION
Computational Cost Analysis

● Computation time
At the beginning of the game (all pieces present), each step takes:
Depth = 1: 0.170 s
Depth = 2: 2.155 s
Depth = 3: 29.159 s
Depth = 4: 375.468 s
On average (over the entire game), each step takes:
Depth = 1: 0.120 s
Depth = 2: 1.629 s
Depth = 3: 27.918 s
Depth = 4: 373.726 s

● Max Depth for a 10-minute turn

The minimax search first exceeds 10 minutes per step at depth 5.

Chance of Winning Analysis


● We had our AI play against itself and observed each move. The developers recorded the
percentage of obviously rookie/incorrect moves made by one of the AIs over a game at
different depths. We classify rookie moves into the following categories:

1. Move without protecting an important piece under potential capture


2. Move an important piece such that it results in an immediate capture of this piece
3. Move a piece such that it results in an immediate capture of an important piece
4. Move a piece meaninglessly, such as moving the general back and forth
5. Move a piece, falling under an obvious trap set up by the opponent
6. Move away from a definite capture of the opponent’s important piece
7. Move away from a definite win
8. Move into a definite loss

Since the judgement here is subjective and the categories are not mutually exclusive,
this is only a qualitative measure of AI quality at different depths. At depth 1 it took
30 turns for red to win; at depth 2, 50 turns for black to win; and at depth 3, 25 turns
for red to win.
Rookie Move Type    1     2     3    4    5    6     7    8
Depth = 1 (%)     23.3  23.3  23.3  3.3  6.7  1.0  13.3  3.3
Depth = 2 (%)     16.0  20.0  18.0  4.0  4.0  0.0   6.0  0.0
Depth = 3 (%)     12.0  16.0  16.0  4.0  0.0  4.0   6.0  3.3

We also recorded the percentage of good moves made by one of the AIs (the same AI as
in the rookie move observation) in AI-versus-AI games at different depths. We classify
good moves into the following categories:
1. Move a piece following simple traditional routines
2. Moves that set up traps following traditional routines
3. Moves that avoid or resolve traps following traditional routines
4. Moves that capture the general following non-trivial routines
Again, since the judgement here is subjective and the categories are not mutually
exclusive, this is only a qualitative measure of AI quality at different depths.

Good Move Type    1    2    3    4
Depth = 1 (%)    6.7  0.0  0.0  0.0
Depth = 2 (%)   18.0  2.0  2.0  8.0
Depth = 3 (%)   34.0  4.0  0.0  0.0

The table above shows that the AI's performance improves as depth increases. Depth 4
was not tested, since each step took too long.

Here are some additional observations on the AI from the AI-versus-AI playtests.
- The AI at any depth plays well in the beginning, but as the game nears its end,
instead of making immediate winning moves, the winning side captures the
General through very complicated maneuvers, such as moving all Pawns across
the river to corner the General before capturing it.
- The AI at shallower depth uses Pawns less than at greater depth. This may be
because Pawns need more than 2 moves to realize their value, so a
shallow-depth AI does not see their future power.

● Play test by invitation.


Each participant played against our AI at depth 3. Since most participants are very busy
Cornell students, we evaluated their Chinese Chess background with a few simple
questions: the continuous number of years of Chinese Chess playing experience, and a
self-rating of skill out of 10, with 1 being the worst. We recorded the outcome of each
game, and asked each participant for a simplified evaluation based on the rookie/good
move categories above, rating each category from 1 to 5. Here are the results:
Here are the results:
- Wenhui Feng: < 1 year experience, 1/10 self rating
- Didn’t finish the game
- No Comment

Rookie move ratings (types 1-8): 2, 3, 3, 2, 1, 1, 1, NA
Good move ratings (types 1-4): 3, 2, 1, NA

- Shiyu Wang: 2 years experience, 4/10 self rating


- AI lost
- Comment: The AI was trying very hard. It gets confused when it is in trouble,
though. You should handle imminent capture of its own General better. I
almost feel like it didn't try very hard at all. But it was not trivial, apparently.

Rookie move ratings (types 1-8): 4, 3, 1, 2, 3, 1, 1, 5
Good move ratings (types 1-4): 2, 1, 1, 1

- Le Yuan: < 1 year experience, 1/10 self rating


- AI won
- Comment: The AI could have beaten me much sooner. I don't understand
why it moves all the pawns over. It must love torturing its opponent.

Rookie move ratings (types 1-8): 3, 3, 3, 2, 1, 1, 1, 0
Good move ratings (types 1-4): 4, 3, 3, 1

- Tongtong Lian: 3 years experience, 4/10 self rating


- AI lost
- Comment: This AI does surprise me a lot. It definitely does not see too far, though. I
could set up traps and it fell into them easily. Of course, it does not set up
any traps either. But it was a fun game.

Rookie move ratings (types 1-8): 3, 3, 3, 4, 3, 1, 1, 5
Good move ratings (types 1-4): 2, 1, 1, 1
- Yang Guo: 2 years experience, 3/10 self rating
- Didn’t finish the game
- Comment: The AI is legit. It does replicate some traditional moves, some
of which even surprised me.

Rookie move ratings (types 1-8): 4, 2, 2, 3, 2, 1, 1, NA
Good move ratings (types 1-4): 3, 2, 1, NA

- Martin Wang: < 1 year experience, 2/10 self rating


- AI won
- No Comment.

Rookie move ratings (types 1-8): 2, 3, 2, 2, 1, 1, 1, 0
Good move ratings (types 1-4): 2, 3, 3, 5

- Chuhan Liu: < 1 year experience, 1/10 self rating


- AI won
- Comment: The AI works pretty well. It's just so slow to think about its
moves, but I guess you have to trade off time versus good moves.

Rookie move ratings (types 1-8): 3, 1, 3, 2, 2, 1, 4, 2
Good move ratings (types 1-4): 4, 1, 3, 1

- Xin Lin: < 1 year experience, 2/10 self rating


- Didn’t finish the game
- Comment: I was very surprised that it was able to perform some well-known
good moves, such as "当头炮,马来跳" ("Head-on Cannon, the Horse
jumps in reply"). Sometimes it does something that I have to think about for a
while, and then I go "Oh, that is a good move".

Rookie move ratings (types 1-8): 2, 1, 3, 3, 1, 1, 3, NA
Good move ratings (types 1-4): 3, 1, 2, NA

- Guangze Xu: 3 years experience, 5/10 self rating


- AI lost
- Comment: Good game. The AI could have easily beaten me when I was a beginner.
But it does make obviously stupid moves. Furthermore, why bother moving
pawns so often?

Rookie move ratings (types 1-8): 3, 2, 3, 2, 3, 1, 1, 5
Good move ratings (types 1-4): 2, 3, 1, 0

- Haoyun Xu: 2 years experience, 2/10 self rating


- AI lost
- Comment: Maybe it's just me, but this AI does not feel like a person. It does make
good moves sometimes, but some of them are not very conventional. I did
feel pressured at times.

Rookie move ratings (types 1-8): 3, 2, 1, 4, 4, 2, 1, 5
Good move ratings (types 1-4): 2, 1, 2, 0

It's worth noting that these ratings are again very qualitative, given the time constraint;
it is very hard to give a definite score. A more quantitative evaluation would be possible
with more time.

● We had our AI (depth 3) play against some online Chinese Chess AIs. The results were
recorded:
4399 Games:
http://www.4399.com/flash/36944_3.htm
Result: Our AI lost by Double Cannon trap

Siyuetian Chinese Chess Website:


http://www.siyuetian.net/javaxq/chessdy.html
Result: Our AI lost by Double Cannon trap

17yy Games:
http://www.17yy.com/f/play/61219.html
Result: Our AI lost by Double Cannon trap

The Double Cannon trap is a traditional trap that takes more than 3 plies to see through. It
makes sense that our AI lost to this trap, because the depth of our minimax tree is limited.
As a future improvement, we could hard-code a database of traditional Chinese Chess traps
into the AI's minimax search, letting the AI save computation on well-known traps.
REFERENCES
[1] Yen, S. J., Chen, J. C., Yang, T. N., & Hsu, S. C. (2004). Computer Chinese chess. ICGA Journal, 27(1),
3-18.
[2] Shannon, C. E. (1988). Programming a computer for playing chess. In Computer chess compendium
(pp. 2-13). Springer New York.
[3] Xiangqi.(2017, January 26).Retrieved March 24,2017,from https://en.wikipedia.org/wiki/Xiangqi#Rules
