
Game Theory

Content
• Introduction
• Noncooperative Versus Cooperative Games
• Static Vs Dynamic Games
• Representation of Games
• Solution techniques
Introduction
• Game theory is concerned with how rational individuals make decisions when they are mutually
interdependent.

• It attempts to mathematically capture behaviour in strategic situations, in which an individual's success
in making choices depends on the choices of others.

• While initially developed to analyze competitions in which one individual does better at another's
expense (zero-sum games), it has been expanded to treat a wide class of interactions, which are classified
according to several criteria.

• A game is any situation in which players (the participants) make strategic decisions—i.e., decisions that
take into account each other’s actions and responses.

• Examples of games include firms competing with each other by setting prices, or a group of consumers
bidding against each other at an auction for a work of art.
Noncooperative Vs. Cooperative Games
• In a cooperative game, players can negotiate binding contracts that allow them to plan joint strategies.

• In a noncooperative game, negotiation and enforcement of binding contracts are not possible.

• An example of a cooperative game is the bargaining between a buyer and a seller over the price of a rug. If the
rug costs $100 to produce and the buyer values the rug at $200, a cooperative solution to the game is possible:

• An agreement to sell the rug at any price between $101 and $199 will maximize the sum of the buyer’s consumer
surplus and the seller’s profit, while making both parties better off.

• An example of a noncooperative game is a situation in which two competing firms take each other’s likely
behaviour into account when independently setting their prices.

• Each firm knows that by undercutting its competitor, it can capture more market share. But it also knows that in doing
so, it risks setting off a price war.
Static Vs. Dynamic Games
• In static games the players make their moves in isolation without knowing what other players
have done.

• This does not necessarily mean that all decisions are made at the same time, but only that each
player decides as if all decisions were made at the same time.
• An example of a static game is a one-off sealed bid auction.
• In this type of auction each player submits only one bid, without knowing what any of the other players has
bid.
• The highest bid is then accepted as the purchase price.

• Dynamic games have a sequence to the order of play, and players observe some, if not all, of
one another's moves as the game progresses.
• An example of a dynamic game is a so-called English auction.
• Here players openly bid up the price of an object.
• The final and highest bid is accepted as the purchase price.
Representation of Games
• A game consists of:
• a set of players - the individuals who make the relevant decisions. For
there to be interdependence there must be at least two players in the
game.

• a set of strategies for each player - a complete description of how a player
could play the game.

• the payoffs to each player for every possible list of strategy choices by
the players.
Representation of Games

• There are two different ways of representing the same game, either as a normal form
game or as an extensive form game.

• The normal form gives the minimum amount of information necessary to describe a
game.

• It lists the players, the strategies available to each player, and the resulting pay-offs to
each player.
Example: The Prisoners’ Dilemma in a
Normal Form Game
                            Prisoner 2
                     Confess        Don’t Confess
Prisoner 1  Confess      -6, -6         0, -9
     Don’t Confess       -9, 0          -1, -1
Representation of Games
• In extensive form games greater attention is placed on the timing of the decisions to be made,
as well as on the amount of information available to each player when a decision has to be
made.
• This type of game is represented not with a matrix but with a decision, or game, tree and
generally has the following four elements in common.

• Nodes -This is a position in the game where one of the players must make a decision. The first position,
called the initial node, is an open dot, all the rest are filled in. Each node is labelled so as to identify who is
making the decision.

• Branches -These represent the alternative choices that the person faces, and so correspond to available
actions.

• Vectors - These represent the pay-offs for each player, with the pay-offs listed in the order of players.

• Information Sets - When two or more nodes are joined together by a dashed line, this means that the player
making the decision does not know which of these nodes he or she is at.
Example: The Prisoners’ Dilemma in
Extensive Form Game
Exercise_1: Depict the following situation as both a normal form game and an
extensive form game:

• Africell and Qcell are thinking of launching 5G internet service in the
Gambia in June 2020. If both firms launch the service, then they will
each make a profit of GMD2.4million. If only one firm launches 5G
internet service, then it can act as a monopolist and will make a profit of
GMD6million. If either firm decides not to launch the 5G internet
service, that firm makes a loss of GMD3million, due to costs already
incurred in developing the 5G internet service.
Solution Techniques for Solving Games
• A solution to a game is a prediction of what each player in that game will do.
• This may be a very precise prediction, where the solution gives one optimal strategy for each
player.
• When this occurs, the solution is said to be unique.
• Many different solution techniques have been proposed for different types of games.
• For static games two broad solution techniques have been applied.
• The first set of solution techniques rely on the concept of dominance.
• Here the solution to a game is determined by attempting to rule out strategies that a rational
person would never play.
• Arguments based on dominance seek to answer the question 'What strategies would a rational player never
play?’
• The second set of solution techniques is based on the concept of equilibrium.
• In non-cooperative games an equilibrium occurs when none of the players, acting individually, has an
incentive to deviate from the predicted solution.
Strict Dominance
• A strategy is said to be strictly dominated if another strategy always gives
improved payoffs whatever the other players in the game do.
• This solution technique makes the seemingly reasonable assumption that a rational
player will never play a strictly dominated strategy.
• If players knowingly play a strictly dominated strategy, they cannot be
maximizing their expected pay-off, given their beliefs about what other players
will do.
• In this sense a player who plays a strictly dominated strategy is said to be
irrational.
• Applying the principle of strict dominance rules out this type of irrational
behaviour.
Solve the previous product launch game described in Exercise_1 using the
principle of strict dominance.

                           Qcell
                   Launch 5G         Stay Out
Africell  Launch 5G   2.4M, 2.4M       6M, -3M
          Stay Out    -3M, 6M          -3M, -3M
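The dominance argument can be checked mechanically. Below is a minimal sketch in Python, using the payoffs from the exercise text (in GMD millions, with a firm that stays out losing 3): Launch 5G strictly dominates Stay Out for Africell whatever Qcell does, and by symmetry the same holds for Qcell, so the unique prediction is that both firms launch.

```python
# Strict dominance check for the launch game (payoffs in GMD millions,
# taken from the exercise text; a firm that stays out loses 3).
africell = {("Launch", "Launch"): 2.4, ("Launch", "StayOut"): 6.0,
            ("StayOut", "Launch"): -3.0, ("StayOut", "StayOut"): -3.0}

def strictly_dominates(payoffs, s1, s2, rival_strategies):
    # s1 strictly dominates s2 if it pays strictly more against every rival choice.
    return all(payoffs[(s1, r)] > payoffs[(s2, r)] for r in rival_strategies)

rivals = ["Launch", "StayOut"]
print(strictly_dominates(africell, "Launch", "StayOut", rivals))  # True
```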
Weak Dominance

• A strategy is said to be weakly dominated if another strategy makes the person
better off in some situations and leaves them indifferent in all others.

• Again, it seems reasonable to assume that a rational player will not play a
weakly dominated strategy, as he or she could do at least as well, and possibly
even better, by playing the dominant strategy.
Example
Iterated strict dominance
• Iterated strict dominance assumes that strict dominance can be applied
successively to different players in a game.

• For example, if one player rules out a particular strategy, because it is strictly
dominated by another, then it is assumed other players recognize this and that
they also believe the other player will not play this dominated strategy.

• This in turn may lead them to exclude dominated strategies, and so on. In this
way it may be possible to exclude all but one strategy for each player, and so
make a unique prediction for the game being analysed
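The successive elimination described above can be sketched in code. The 2×3 game below is a standard textbook-style illustration with hypothetical payoffs (not from these slides); rows are indexed 0 (U) and 1 (D), columns 0 (L), 1 (C), 2 (R). Elimination proceeds: column 2 is strictly dominated by column 1, then row 1 by row 0, then column 0 by column 1, leaving the unique prediction (U, C).

```python
# Iterated strict dominance on a two-player game with hypothetical payoffs.
# A[i][j] and B[i][j] are the row and column player's payoffs.
A = [[1, 1, 0],
     [0, 0, 2]]
B = [[0, 2, 1],
     [3, 1, 0]]

def iterated_strict_dominance(A, B):
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        # Remove any row strictly dominated by another surviving row.
        for r in rows[:]:
            if any(all(A[s][c] > A[r][c] for c in cols) for s in rows if s != r):
                rows.remove(r); changed = True
        # Remove any column strictly dominated by another surviving column.
        for c in cols[:]:
            if any(all(B[r][s] > B[r][c] for r in rows) for s in cols if s != c):
                cols.remove(c); changed = True
    return rows, cols

print(iterated_strict_dominance(A, B))  # ([0], [1]) -> unique prediction (U, C)
```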
Iterated strict dominance
Iterated weak dominance

• This is the same as iterated strict dominance except here it is weak dominance
that is applied successively to different players in the game.

• Again, it is possible that this technique can produce a unique solution to a
particular game.
• One problem with iterated weak dominance, which is not shared by iterated
strict dominance, is that the predicted solution can depend on the order in
which players' strategies are eliminated.
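This order dependence is easy to demonstrate. The 2×2 game below is a hypothetical example: M weakly dominates T for the row player, while L weakly dominates R for the column player, and removing one weakly dominated strategy at a time leaves different survivors depending on which elimination is done first.

```python
# Hypothetical 2x2 game showing order dependence of iterated weak dominance.
#            L       R
#   T      (1,1)   (0,0)
#   M      (1,1)   (2,1)
A = {("T","L"): 1, ("T","R"): 0, ("M","L"): 1, ("M","R"): 2}  # row payoffs
B = {("T","L"): 1, ("T","R"): 0, ("M","L"): 1, ("M","R"): 1}  # column payoffs

def weakly_dominated(strats, rival_strats, payoff):
    # Strategies with an alternative that is never worse and sometimes better.
    out = []
    for x in strats:
        for y in strats:
            if y != x and all(payoff(y, r) >= payoff(x, r) for r in rival_strats) \
                      and any(payoff(y, r) >  payoff(x, r) for r in rival_strats):
                out.append(x)
                break
    return out

def eliminate(rows, cols, rows_first):
    rows, cols = list(rows), list(cols)
    while True:
        dr = weakly_dominated(rows, cols, lambda s, r: A[(s, r)])
        dc = weakly_dominated(cols, rows, lambda s, r: B[(r, s)])
        order = [("rows", dr), ("cols", dc)] if rows_first else [("cols", dc), ("rows", dr)]
        for side, dominated in order:
            if dominated:                       # remove one strategy, then re-scan
                (rows if side == "rows" else cols).remove(dominated[0])
                break
        else:
            return rows, cols                   # nothing left to eliminate

print(eliminate(["T", "M"], ["L", "R"], rows_first=True))   # (['M'], ['L', 'R'])
print(eliminate(["T", "M"], ["L", "R"], rows_first=False))  # (['T', 'M'], ['L'])
```

Eliminating T first leaves B free to play either L or R; eliminating R first leaves A free to play either T or M, so the two orders predict different sets of outcomes.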
Iterated weak dominance
• If a game yields a unique solution by applying either strict, weak, or iterated
dominance, then that game is said to be dominance solvable.

• The main problem with all these solution techniques is that often they give
very imprecise predictions about a game.

• Consider the game shown in the figure below.

• In this game arguments based on dominance lead to the very imprecise
prediction that anything can happen!

• If a more specific solution to this type of game is needed, then a stronger
solution technique must be applied.
Nash equilibrium
• The concept of Nash equilibrium is motivated by the question 'What properties must
an equilibrium have?’
• The answer to this question from John Nash (1951), based on much earlier work
by Cournot (1838), was that in equilibrium each player's chosen strategy is
optimal given that every other player chooses the equilibrium strategy.
• Finding the Nash equilibrium for any game involves two stages.
• First, we identify each player's optimal strategy in response to what the other players
might do. This involves working through each player in turn and determining their
optimal strategies and is done for every combination of strategies by the other
players.
• Second, a Nash equilibrium is identified when all players are playing their optimal
strategies simultaneously.
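The two-stage procedure can be sketched directly in code, using the first 2×2 example game below (payoffs (3,9), (1,8), (0,0), (2,1)). Stage one computes each player's best response to each strategy of the other; stage two keeps the cells where both strategies are mutual best responses. The `max` call picks a single best response, which is enough here because no player is ever indifferent.

```python
# Two-stage search for pure-strategy Nash equilibria.
A = {("U","L"): 3, ("U","R"): 1, ("D","L"): 0, ("D","R"): 2}  # Player A's payoffs
B = {("U","L"): 9, ("U","R"): 8, ("D","L"): 0, ("D","R"): 1}  # Player B's payoffs
rows, cols = ["U", "D"], ["L", "R"]

# Stage 1: best responses.
best_A = {c: max(rows, key=lambda r: A[(r, c)]) for c in cols}  # A's best row vs each column
best_B = {r: max(cols, key=lambda c: B[(r, c)]) for r in rows}  # B's best column vs each row

# Stage 2: a cell is a Nash equilibrium when both strategies are mutual best responses.
nash = [(r, c) for r in rows for c in cols if best_A[c] == r and best_B[r] == c]
print(nash)  # [('U', 'L'), ('D', 'R')]
```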
Nash equilibrium
• The above methodology only identifies pure-strategy Nash equilibria and not
mixed-strategy Nash equilibria.
• A pure-strategy equilibrium is where each player plays one specific strategy.
• A mixed-strategy equilibrium is where at least one player in the game
randomizes over some or all of their pure strategies.
• This means that players place a probability distribution over alternative
strategies.
• For example, players might decide to play each of two available pure strategies
with a probability of 0.5, and never play any other strategy.
• A pure strategy is therefore a restricted mixed strategy with a probability of one
given to the chosen strategy, and zero to all the others.
Pure-Strategy Nash Equilibrium
Player B
L R

U (3,9) (1,8)
Player A
D (0,0) (2,1)

Suppose that play is simultaneous.

We discovered that the game has two Nash
equilibria: (U,L) and (D,R).
Pure-Strategy Nash Equilibrium
Player B
L R

U (3,9) (1,8)
Player A
D (0,0) (2,1)

Player A has been thought of as choosing
to play either U or D, but not a combination of
both; that is, as playing purely U or D.
U and D are Player A’s pure strategies.
Pure-Strategy Nash Equilibrium
Player B
L R

U (3,9) (1,8)
Player A
D (0,0) (2,1)

Similarly, L and R are Player B’s pure
strategies.
Pure-Strategy Nash Equilibrium
Player B
L R

U (3,9) (1,8)
Player A
D (0,0) (2,1)

Consequently, (U,L) and (D,R) are pure-strategy
Nash equilibria. Must every game
have at least one pure-strategy Nash
equilibrium?
Pure-Strategy Nash Equilibrium
Player B
L R
U (1,2) (0,4)
Player A
D (0,5) (3,2)

Here is a new game. Are there any pure-strategy
Nash equilibria?
Pure-Strategy Nash Equilibrium
Player B
L R
U (1,2) (0,4)
Player A
D (0,5) (3,2)

Is (U,L) a Nash equilibrium?


Pure-Strategy Nash Equilibrium
Player B
L R
U (1,2) (0,4)
Player A
D (0,5) (3,2)

Is (U,L) a Nash equilibrium? No.


Is (U,R) a Nash equilibrium?
Pure-Strategy Nash Equilibrium
Player B
L R
U (1,2) (0,4)
Player A
D (0,5) (3,2)

Is (U,L) a Nash equilibrium? No.


Is (U,R) a Nash equilibrium? No.
Is (D,L) a Nash equilibrium?
Pure-Strategy Nash Equilibrium
Player B
L R
U (1,2) (0,4)
Player A
D (0,5) (3,2)

Is (U,L) a Nash equilibrium? No.


Is (U,R) a Nash equilibrium? No.
Is (D,L) a Nash equilibrium? No.
Is (D,R) a Nash equilibrium?
Pure-Strategy Nash Equilibrium
Player B
L R
U (1,2) (0,4)
Player A
D (0,5) (3,2)

Is (U,L) a Nash equilibrium? No.


Is (U,R) a Nash equilibrium? No.
Is (D,L) a Nash equilibrium? No.
Is (D,R) a Nash equilibrium? No.
Pure-Strategy Nash Equilibrium
Player B
L R
U (1,2) (0,4)
Player A
D (0,5) (3,2)

So the game has no Nash equilibria in pure
strategies. Even so, the game does have a
Nash equilibrium, but in mixed strategies.
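The exhaustive check carried out in the slides above can be written as a short sketch: a cell is a pure-strategy Nash equilibrium only if neither player can gain by deviating unilaterally, and in this game every cell fails that test.

```python
# Exhaustive check: no cell of this game is a pure-strategy Nash equilibrium.
A = {("U","L"): 1, ("U","R"): 0, ("D","L"): 0, ("D","R"): 3}  # Player A's payoffs
B = {("U","L"): 2, ("U","R"): 4, ("D","L"): 5, ("D","R"): 2}  # Player B's payoffs
rows, cols = ["U", "D"], ["L", "R"]

def is_pure_nash(r, c):
    # Neither player can gain by a unilateral deviation.
    no_row_dev = all(A[(r, c)] >= A[(s, c)] for s in rows)
    no_col_dev = all(B[(r, c)] >= B[(r, s)] for s in cols)
    return no_row_dev and no_col_dev

pure_nash = [(r, c) for r in rows for c in cols if is_pure_nash(r, c)]
print(pure_nash)  # [] -- no pure-strategy Nash equilibrium
```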
Mixed Strategy Nash Equilibrium

• Instead of playing purely Up or Down, Player A selects a probability
distribution (pU, 1-pU), meaning that with probability pU Player A will
play Up and with probability 1-pU will play Down.

• Player A is mixing over the pure strategies Up and Down.

• The probability distribution (pU, 1-pU) is a mixed strategy for Player A.
Mixed Strategy Nash Equilibrium
• Similarly, Player B selects a probability distribution (pL,1-pL), meaning
that with probability pL Player B will play Left and with probability 1-pL
will play Right.

• Player B is mixing over the pure strategies Left and Right.

• The probability distribution (pL,1-pL) is a mixed strategy for Player B.


Mixed Strategy Nash Equilibrium
Player B
L R
U (1,2) (0,4)
Player A
D (0,5) (3,2)

This game has no pure-strategy Nash
equilibria, but it does have a Nash
equilibrium in mixed strategies. How is it
computed?
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, pU     (1,2)    (0,4)
           D, 1-pU   (0,5)    (3,2)
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, pU     (1,2)    (0,4)
           D, 1-pU   (0,5)    (3,2)

If B plays Left her expected payoff is
2pU + 5(1 - pU).
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, pU     (1,2)    (0,4)
           D, 1-pU   (0,5)    (3,2)

If B plays Left her expected payoff is
2pU + 5(1 - pU).
If B plays Right her expected payoff is
4pU + 2(1 - pU).
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, pU     (1,2)    (0,4)
           D, 1-pU   (0,5)    (3,2)

If 2pU + 5(1 - pU) > 4pU + 2(1 - pU) then
B would play only Left. But there are no
Nash equilibria in which B plays only Left.
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, pU     (1,2)    (0,4)
           D, 1-pU   (0,5)    (3,2)

If 2pU + 5(1 - pU) < 4pU + 2(1 - pU) then
B would play only Right. But there are no
Nash equilibria in which B plays only Right.
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, pU     (1,2)    (0,4)
           D, 1-pU   (0,5)    (3,2)

So for there to exist a Nash equilibrium, B
must be indifferent between playing Left or
Right; i.e. 2pU + 5(1 - pU) = 4pU + 2(1 - pU).
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, pU     (1,2)    (0,4)
           D, 1-pU   (0,5)    (3,2)

So for there to exist a Nash equilibrium, B
must be indifferent between playing Left or
Right; i.e. 2pU + 5(1 - pU) = 4pU + 2(1 - pU).
This gives pU = 3/5.
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, 3/5    (1,2)    (0,4)
           D, 2/5    (0,5)    (3,2)

So for there to exist a Nash equilibrium, B
must be indifferent between playing Left or
Right; i.e. 2pU + 5(1 - pU) = 4pU + 2(1 - pU).
This gives pU = 3/5.
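The indifference condition can be verified exactly with Python's `fractions` module: at pU = 3/5, Left and Right give B the same expected payoff.

```python
from fractions import Fraction

# B's expected payoffs from each pure strategy, as functions of pU.
left  = lambda p: 2*p + 5*(1 - p)   # B plays Left
right = lambda p: 4*p + 2*(1 - p)   # B plays Right

p_U = Fraction(3, 5)
print(left(p_U), right(p_U))  # 16/5 16/5 -- B is indifferent
```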
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, 3/5    (1,2)    (0,4)
           D, 2/5    (0,5)    (3,2)
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, 3/5    (1,2)    (0,4)
           D, 2/5    (0,5)    (3,2)

If A plays Up his expected payoff is
1·pL + 0·(1 - pL) = pL.
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, 3/5    (1,2)    (0,4)
           D, 2/5    (0,5)    (3,2)

If A plays Up his expected payoff is
1·pL + 0·(1 - pL) = pL.
If A plays Down his expected payoff is
0·pL + 3·(1 - pL) = 3(1 - pL).
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, 3/5    (1,2)    (0,4)
           D, 2/5    (0,5)    (3,2)

If pL > 3(1 - pL) then A would play only Up.
But there are no Nash equilibria in which A
plays only Up.
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, 3/5    (1,2)    (0,4)
           D, 2/5    (0,5)    (3,2)

If pL < 3(1 - pL) then A would play only
Down. But there are no Nash equilibria in
which A plays only Down.
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, 3/5    (1,2)    (0,4)
           D, 2/5    (0,5)    (3,2)

So for there to exist a Nash equilibrium, A
must be indifferent between playing Up or
Down; i.e. pL = 3(1 - pL).
Mixed Strategy Nash Equilibrium
            Player B
            L, pL        R, 1-pL
Player A   U, 3/5    (1,2)    (0,4)
           D, 2/5    (0,5)    (3,2)

So for there to exist a Nash equilibrium, A
must be indifferent between playing Up or
Down; i.e. pL = 3(1 - pL). This gives pL = 3/4.
Mixed Strategy Nash Equilibrium
            Player B
            L, 3/4       R, 1/4
Player A   U, 3/5    (1,2)    (0,4)
           D, 2/5    (0,5)    (3,2)

So for there to exist a Nash equilibrium, A
must be indifferent between playing Up or
Down; i.e. pL = 3(1 - pL). This gives pL = 3/4.
Mixed Strategy Nash Equilibrium
            Player B
            L, 3/4       R, 1/4
Player A   U, 3/5    (1,2)    (0,4)
           D, 2/5    (0,5)    (3,2)

So the game’s only Nash equilibrium has A
playing the mixed strategy (3/5, 2/5) and B
playing the mixed strategy (3/4, 1/4).
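As a quick sketch, both indifference conditions can be checked at once: given B's mix (3/4, 1/4), A gets the same expected payoff from Up and Down, and given A's mix (3/5, 2/5), B gets the same from Left and Right.

```python
from fractions import Fraction

p_U, p_L = Fraction(3, 5), Fraction(3, 4)  # the equilibrium mixes found above

a_up    = 1*p_L + 0*(1 - p_L)   # A's expected payoff from Up, given B's mix
a_down  = 0*p_L + 3*(1 - p_L)   # A's expected payoff from Down
b_left  = 2*p_U + 5*(1 - p_U)   # B's expected payoff from Left, given A's mix
b_right = 4*p_U + 2*(1 - p_U)   # B's expected payoff from Right

print(a_up == a_down, b_left == b_right)  # True True -- both players indifferent
```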
Mixed Strategy Nash Equilibrium
            Player B
            L, 3/4            R, 1/4
Player A   U, 3/5    (1,2) 9/20    (0,4)
           D, 2/5    (0,5)         (3,2)

The payoffs will be (1,2) with probability
3/5 × 3/4 = 9/20.
Mixed Strategy Nash Equilibrium
            Player B
            L, 3/4            R, 1/4
Player A   U, 3/5    (1,2) 9/20    (0,4) 3/20
           D, 2/5    (0,5)         (3,2)

The payoffs will be (0,4) with probability
3/5 × 1/4 = 3/20.
Mixed Strategy Nash Equilibrium
            Player B
            L, 3/4            R, 1/4
Player A   U, 3/5    (1,2) 9/20    (0,4) 3/20
           D, 2/5    (0,5) 6/20    (3,2)

The payoffs will be (0,5) with probability
2/5 × 3/4 = 6/20.
Mixed Strategy Nash Equilibrium
            Player B
            L, 3/4            R, 1/4
Player A   U, 3/5    (1,2) 9/20    (0,4) 3/20
           D, 2/5    (0,5) 6/20    (3,2) 2/20

The payoffs will be (3,2) with probability
2/5 × 1/4 = 2/20.
Mixed Strategy Nash Equilibrium
            Player B
            L, 3/4            R, 1/4
Player A   U, 3/5    (1,2) 9/20    (0,4) 3/20
           D, 2/5    (0,5) 6/20    (3,2) 2/20
Mixed Strategy Nash Equilibrium
            Player B
            L, 3/4            R, 1/4
Player A   U, 3/5    (1,2) 9/20    (0,4) 3/20
           D, 2/5    (0,5) 6/20    (3,2) 2/20

A’s expected Nash equilibrium payoff is
1·(9/20) + 0·(3/20) + 0·(6/20) + 3·(2/20) = 3/4.
Mixed Strategy Nash Equilibrium
            Player B
            L, 3/4            R, 1/4
Player A   U, 3/5    (1,2) 9/20    (0,4) 3/20
           D, 2/5    (0,5) 6/20    (3,2) 2/20

A’s expected Nash equilibrium payoff is
1·(9/20) + 0·(3/20) + 0·(6/20) + 3·(2/20) = 3/4.
B’s expected Nash equilibrium payoff is
2·(9/20) + 4·(3/20) + 5·(6/20) + 2·(2/20) = 16/5.
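The outcome probabilities and expected equilibrium payoffs above can be reproduced as a short sketch:

```python
from fractions import Fraction

p_U, p_L = Fraction(3, 5), Fraction(3, 4)  # equilibrium mixes
payoffs = {("U","L"): (1, 2), ("U","R"): (0, 4),
           ("D","L"): (0, 5), ("D","R"): (3, 2)}
# Players randomize independently, so each cell's probability is a product.
prob = {("U","L"): p_U*p_L,       ("U","R"): p_U*(1 - p_L),
        ("D","L"): (1 - p_U)*p_L, ("D","R"): (1 - p_U)*(1 - p_L)}

payoff_A = sum(prob[c] * payoffs[c][0] for c in prob)
payoff_B = sum(prob[c] * payoffs[c][1] for c in prob)
print(payoff_A, payoff_B)  # 3/4 16/5
```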
