
HW1_Q1_Bhargavi

Sunday, January 29, 2023 1:54 AM

Q1. Decision problem with a continuous action space


Tadelis 1.7: Mayor's decision on spending money on the city's parks and recreation

Given: Yearly Budget of the city is $20,000,000


Amount of money the mayor decides to spend: c ∈ C

Money-equivalent benefit from spending $c (payoff function): v(c)

a) The action set for the city's mayor is the set of amounts they could spend; it is a continuous action space.

As per the problem, 5% of the yearly budget is the maximum limit on spending for parks and recreation, i.e., 5% of $20,000,000 = $1,000,000.

Spending limit: C = [$0, $1,000,000] — the action set for the city's mayor

b) How much money the mayor should spend:


[Figure: payoff curve v(c), with the maximum payoff at c = $640,000 and the budget limit at $1,000,000]

It is best to choose the amount of money that gives the maximum benefit, so we choose the amount where v(c) is maximum.

v(c) reaches its maximum value of 800 at c = $640,000.

So the mayor should spend $640,000 so that the city gets the maximum benefit out of the money spent.

c) The new preferences after the shift in public opinion, v(c):

Since there is no change in the yearly budget or in the 5% city-code restriction on spending,
the action set remains the same: C = [$0, $1,000,000].
But the payoff function has changed, and so has the amount of money that needs to be spent to reap the maximum benefit.
[Figure: new payoff curve v(c), with the maximum payoff at c = $2,560,000 and the budget limit at $1,000,000]

As in part b), we find the point where the payoff function is maximum: v(c) reaches its maximum value of 1600 at c = $2,560,000.

But this amount exceeds the budget limit, so the best approach under the constraint is to use the entire available budget.
So the mayor should spend the full available budget of $1,000,000 so that the city gets the maximum benefit out of the money spent.
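The constrained-choice rule used in parts b) and c) can be sketched in a few lines of Python. The actual payoff function v(c) is not reproduced in these notes, so the quadratics below are hypothetical stand-ins whose peaks sit at the values found above ($640,000 and $2,560,000); only the argmax-under-a-budget logic is the point.

```python
# Hypothetical single-peaked payoffs; the real v(c) is not given in the notes.
BUDGET_CAP = 1_000_000  # 5% of the $20,000,000 yearly budget

def argmax_on_budget(v, cap=BUDGET_CAP, step=10_000):
    """Grid-search the spending level in [0, cap] that maximizes v(c)."""
    grid = range(0, cap + 1, step)
    return max(grid, key=v)

# Part b: hypothetical payoff peaking at c* = $640,000 (inside the budget).
v_before = lambda c: -(c - 640_000) ** 2
# Part c: after the shift, hypothetical peak at c* = $2,560,000 (outside).
v_after = lambda c: -(c - 2_560_000) ** 2

print(argmax_on_budget(v_before))  # 640000
print(argmax_on_budget(v_after))   # 1000000
```

When the peak is affordable the mayor spends exactly it; when it lies beyond the cap, the constrained optimum is the cap itself, matching the conclusions above.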

HW1_Q2_Bhargavi
Sunday, January 29, 2023 2:50 PM

Q2 Decision-making under risk


Tadelis P 2.6: The maximum cost of the impact study at which it is still worthwhile for the developer to have it conducted

Given: The developer, with lots of experience, knows the score will be in the range [40, 70].

As per previous data, if the score is in the range [40, 50] there is a 35% chance the project will be approved;
if the score is in the range [50, 55] there is a 5% chance the project gets approved;
and if the score is greater than 55, then the project gets rejected.

The probability the score will be in range [40, 50] = (50 − 40)/(70 − 40) = 0.3333
The probability the score will be in range [50, 55] = (55 − 50)/(70 − 40) = 0.1667
The probability the score will be in range [55, 70] = (70 − 55)/(70 − 40) = 0.5
The value of the development is $20,000,000.

Score range   Probability of the score      Probability of approval if
              falling in this range         the score lies in this range
[40, 50]      0.3333                        0.35
[50, 55]      0.1667                        0.05
[55, 70]      0.5                           0

The probability that the project will be approved: P(a) = 0.3333 × 0.35 + 0.1667 × 0.05 + 0.5 × 0 = 0.125

The expected value of the project is E(v) = 0.125 × $20,000,000 = $2,500,000, so it is worthwhile to conduct the study only if it costs at most $2,500,000.
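The arithmetic above can be checked with a short script (the uniform score on [40, 70] and the per-range approval chances are as given in the problem):

```python
# Approval probability and expected value for the development project.
# The impact score is uniform on [40, 70]; approval chances per score
# range are as stated in the problem.

SCORE_LO, SCORE_HI = 40, 70
ranges = [            # (range low, range high, P(approve | score in range))
    (40, 50, 0.35),
    (50, 55, 0.05),
    (55, 70, 0.00),
]

p_approve = sum((hi - lo) / (SCORE_HI - SCORE_LO) * p for lo, hi, p in ranges)
expected_value = p_approve * 20_000_000

print(round(p_approve, 3))    # 0.125
print(round(expected_value))  # 2500000
```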

HW1_Q3_Bhargavi
Sunday, January 29, 2023 5:00 PM

Q3 Decision over time


Finding the two-game strategy that will maximize Alice's probability of winning the match.
Given:
If played timidly: either a draw or a loss.
Probability of a draw: P(d) = 0.9
Probability of a loss if played timidly: P(l_t) = 1 − P(d) = 0.1

If played boldly: either a win or a loss.

Probability of a win: P(w) = 0.45
Probability of a loss if played boldly: P(l_b) = 1 − P(w) = 0.55

A win is worth 1 point, a draw 1/2 point, and a loss zero.
Decision tree for the two games (probability, points):

Game 1 Bold:
  Win (0.45, 1 point), then Game 2:
    Bold:  win 0.45 (1 point) / lose 0.55 (0)
    Timid: draw 0.9 (0.5) / lose 0.1 (0)
  Lose (0.55, 0 points), then Game 2:
    Bold:  win 0.45 (1 point) / lose 0.55 (0)
    Timid: draw 0.9 (0.5) / lose 0.1 (0)

Game 1 Timid:
  Draw (0.9, 0.5 points), then Game 2:
    Bold:  win 0.45 (1 point) / lose 0.55 (0)
    Timid: draw 0.9 (0.5) / lose 0.1 (0)
  Lose (0.1, 0 points), then Game 2:
    Bold:  win 0.45 (1 point) / lose 0.55 (0)
    Timid: draw 0.9 (0.5) / lose 0.1 (0)

Here backward induction alone cannot help, as the expected points from playing bold and from playing timid in game 2 are the same, E(u) = 0.45 for each.
It is given that if the score is tied at the end of the two games, the player who won the first game is declared the winner.
So Alice cannot lose the first game, as that reduces her chance of winning the overall match to zero.

Expected points for each game 2 choice, by game 1 outcome:

Game 1 Win (0.45, 1 point):
  Game 2 Bold:  E(u) = 0.45 × 1 + 0.55 × 0 = 0.45
  Game 2 Timid: E(u) = 0.9 × 0.5 + 0.1 × 0 = 0.45

Game 1 Lose playing bold (0.55, 0 points):
  Game 2 Bold:  E(u) = 0.45 × 1 + 0.55 × 0 = 0.45
  Game 2 Timid: E(u) = 0.9 × 0.5 + 0.1 × 0 = 0.45

Game 1 Draw (0.9, 0.5 points):
  Game 2 Bold:  E(u) = 0.45 × 1 + 0.55 × 0 = 0.45
  Game 2 Timid: E(u) = 0.9 × 0.5 + 0.1 × 0 = 0.45

Game 1 Lose playing timid (0.1, 0 points):
  Game 2 Bold:  E(u) = 0.45 × 1 + 0.55 × 0 = 0.45
  Game 2 Timid: E(u) = 0.9 × 0.5 + 0.1 × 0 = 0.45
So first we need to form a strategy that minimizes the probability of losing game 1, in order to maximize the chance of winning the overall match.

If Alice plays the first game timidly, the probability of losing the game is 0.1 and the probability of drawing it is 0.9.
If Alice plays the first game boldly, the probability of losing the game is 0.55 and the probability of winning it is 0.45.

Probability chart, taking winning the match as the aim:

Game 1   Game 2       Overall probability
W        W (bold)     0.45 × 0.45 = 0.2025
W        D (timid)    0.45 × 0.9 = 0.405
W        L (timid)    0.45 × 0.1 = 0.045
W        L (bold)     0.45 × 0.55 = 0.2475
D        W (bold)     0.9 × 0.45 = 0.405
D        D (timid)    0.9 × 0.9 = 0.81 (this strategy does not give a win, only a draw)

Playing game 1 boldly offers winning paths with the maximum probability, but it is not an ideal strategy, as it carries a high probability of losing the first game. Playing game 1 timidly and then game 2 boldly is the ideal strategy with the maximum probability of winning the match.

So Alice needs to play the first game timidly to lessen her chances of losing game 1, and with it the overall match, and she needs to draw that game. She could also play the first game boldly and win it with probability 0.45, but the 0.55 probability of losing it is too high, considering that losing the first game means losing the overall match.

So Alice needs to play game 1 timidly and draw it, then play game 2 boldly and win it, which wins her the overall match with probability 0.9 × 0.45 = 0.405.
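The chosen strategy's win probability, and the tie in per-game expected points that keeps naive backward induction from picking a game 2 action, can be verified in a few lines:

```python
# Win probability of the chosen strategy (timid in game 1, bold in game 2
# after a draw), under the stated rule that losing game 1 leaves Alice no
# way to win the match.

P_DRAW_TIMID = 0.9   # timid play: draw with 0.9, lose with 0.1
P_WIN_BOLD = 0.45    # bold play: win with 0.45, lose with 0.55

# Timid first: Alice must draw game 1, then win game 2 playing boldly.
p_win_match = P_DRAW_TIMID * P_WIN_BOLD
print(round(p_win_match, 3))  # 0.405

# Per-game expected points are identical for both styles, which is why
# backward induction on points alone cannot break the tie:
ev_bold = P_WIN_BOLD * 1 + (1 - P_WIN_BOLD) * 0
ev_timid = P_DRAW_TIMID * 0.5 + (1 - P_DRAW_TIMID) * 0
```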

HW1_Q4_Bhargavi
Sunday, January 29, 2023 9:17 PM

Q4. Bayesian updating and the value of information.


Finding Oscar's optimal search strategy.
Given: Forest A and Forest B are on either side of the road, and Oscar's dog Pablo wanders off into one of the two forests.

The prior probability that the dog is in forest A is 0.7, and in forest B is 0.3:
P(A) = 0.7, P(B) = 0.3
The probability of finding the dog by the end of a day's search, if the dog is in the searched forest, is 0.5 for forest A and 0.8 for forest B:
P(F|A) = 0.5, P(F|B) = 0.8, where F denotes finding the dog.

a) Oscar can only look in one forest per day,
so we need to choose the forest that gives the highest probability of finding the dog there:

P(F ∩ A) = P(F|A) × P(A) = 0.5 × 0.7 = 0.35

P(F ∩ B) = P(F|B) × P(B) = 0.8 × 0.3 = 0.24

P(F ∩ A) > P(F ∩ B)

Since searching forest A gives the highest probability of finding the dog, Oscar should look for the dog in A to maximize his utility.

b) If Oscar was not able to find the dog on the first day, and Pablo remains in the same initial forest, in which forest should Oscar look on the 2nd day?

Let NF denote not finding the dog after searching A on day 1. If the dog is in B, a search of A misses it with certainty, so:
P(NF) = 0.5 × 0.7 + 1 × 0.3 = 0.65

The probability the dog is in A after not finding it on day 1: P(A|NF) = (0.5 × 0.7)/0.65 ≈ 0.538

And P(B|NF) = 0.3/0.65 ≈ 0.462

The probability of finding the dog in A on day 2: P(F|A) × P(A|NF) = 0.5 × 0.538 ≈ 0.269

The probability of finding the dog in B on day 2: P(F|B) × P(B|NF) = 0.8 × 0.462 ≈ 0.369

On the second day Oscar needs to search forest B to maximize his chances of finding the dog, since 0.369 > 0.269.
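The Bayesian update behind parts a) and b) can be written out as a short script (F denotes finding the dog on a given day's search, and the dog stays put):

```python
# Day-1 choice, Bayesian update after a failed search, and day-2 choice.

prior = {"A": 0.7, "B": 0.3}
p_find = {"A": 0.5, "B": 0.8}   # P(F | dog is in that forest)

# Day 1: search where the joint probability of finding is highest.
day1 = {f: p_find[f] * prior[f] for f in prior}
best_day1 = max(day1, key=day1.get)            # "A" (0.35 vs 0.24)

# Day-1 search of A fails. If the dog is in B, searching A misses it
# with certainty, so:
p_miss = (1 - p_find["A"]) * prior["A"] + 1.0 * prior["B"]   # 0.65
posterior = {"A": (1 - p_find["A"]) * prior["A"] / p_miss,   # ~0.538
             "B": prior["B"] / p_miss}                       # ~0.462

# Day 2: again search where the chance of finding is highest.
day2 = {f: p_find[f] * posterior[f] for f in posterior}
best_day2 = max(day2, key=day2.get)            # "B" (~0.369 vs ~0.269)
```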

c) Value of perfect information (in terms of probability) = probability of finding the dog after receiving information − probability of finding the dog without information
VOI = (0.7 × 0.5 + 0.3 × 0.8) − (0.54 × 0.5 + 0.12 × 0.8) = 0.224

HW1_Q5_Bhargavi
Monday, January 30, 2023 10:25 PM

Q5 Expected utility Property

Mr. Campbell's problem

Given: Mr. Campbell has money m = $4.

The utility function for money m ∈ [$0, $10] is u(m).

So his current utility will be u(4) = 1.25.

He is now playing a complex game where at the end of the game he will have either $0, $1, $2 or $10
And the odds for the outcomes are 2:1:1:8

Converting odds to probabilities:

In total we have 2 + 1 + 1 + 8 = 12 parts,

in which getting $0 has a 2-part chance, making the probability of ending the game with $0: P($0) = 2/12 = 1/6.
Similarly,
for $1 we have a 1-part chance, so P($1) = 1/12;
for $2 we have a 1-part chance, so P($2) = 1/12;

and for $10 we have an 8-part chance, so P($10) = 8/12 = 2/3.

a) So the utility of the game is:

U(g) = (1/6) × u(0) + (1/12) × u(1) + (1/12) × u(2) + (2/3) × u(10) = 1.70

Mr. Campbell's utility for the game is 1.70.

b) Since the utility of the game is higher than his current utility (u(4) = 1.25 < U(g) = 1.70), Mr. Campbell should engage in the game.
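The odds-to-probability conversion can be checked with exact fractions; since the utility function u(m) itself is not written out in these notes, the expected-utility helper below takes it as a parameter:

```python
from fractions import Fraction

# Converting the 2:1:1:8 odds into probabilities with exact fractions.

odds = {0: 2, 1: 1, 2: 1, 10: 8}       # dollar outcome -> odds parts
total_parts = sum(odds.values())        # 2 + 1 + 1 + 8 = 12
probs = {m: Fraction(parts, total_parts) for m, parts in odds.items()}
# probs: P($0) = 1/6, P($1) = 1/12, P($2) = 1/12, P($10) = 2/3

def expected_utility(u, probs):
    """E[u] = sum over outcomes of P(outcome) * u(outcome)."""
    return sum(p * u(m) for m, p in probs.items())

# The notes report expected_utility(u, probs) = 1.70 for this game,
# versus u(4) = 1.25 for keeping the $4, so playing is better.
```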

HW1_Q6_Bhargavi
Monday, January 30, 2023 10:26 PM

Q6. Preference orderings

Given: Upon reaching the age of 20, we will be presented with alternative choices in the form (x, y),
where x represents the amount of time left to live, x ∈ [0, 50],
and y represents the amount of time left to work, y ∈ [0, 50], with x ≥ y.

x and y are continuous amounts of time, so there are infinitely many choices to compare.

The statement: "You prefer to live longer and work less, but you always prefer to live longer no matter how long you will work."
We rank choices by x, so as to live as long as possible, and the only situation where we consider y is when the given choices have the same x value.

Even though the preferences satisfy the ordering conditions O1–O8 (from Resnik, Section 2-1), i.e., transitivity and consistency, we have an infinite
number of indifference classes, which makes it impossible to rank the indifference classes and form a utility function using a single real-valued
number.

We could only form a utility function with a single real-valued number if we considered x or y alone.

For example:

Consider choice A = (x1, y1) = (45 yrs, 30 yrs)

choice B = (x2, y2) = (45 yrs, 20 yrs)
choice C = (x3, y3) = (50 yrs, 10 yrs)

Here the preference order is C P B P A: we look first at the value of x and, when x ties, at y, taking the values of x or (50 − y) as the reference for the order.

We use (50 − y) since we want as few working years as possible.
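A lexicographic preference like this cannot be summarized by one real number, but it can still be implemented directly as a comparison key; A, B, C below are the example choices from the text:

```python
# The lexicographic ordering described above: prefer a larger x (years to
# live); only when x ties, prefer a smaller y (years to work).

def prefer_key(choice):
    x, y = choice
    return (-x, y)   # ascending sort puts larger x first, then smaller y

A = (45, 30)
B = (45, 20)
C = (50, 10)

ranking = sorted([A, B, C], key=prefer_key)   # most preferred first
print(ranking)  # [(50, 10), (45, 20), (45, 30)], i.e. C P B P A
```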

