
Integrals

IBP: ∫ u (dv/dx) dx = uv − ∫ v (du/dx) dx

Reverse Chain Rule: ∫ f′(x)/f(x) dx = ln|f(x)|

L'Hopital's Rule

If lim (x→a) f(x)/g(x) equals 0/0 or ∞/∞, then lim (x→a) f(x)/g(x) = lim (x→a) f′(x)/g′(x).
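As a quick numeric sanity check of both rules above (the integrand ∫₀¹ x·eˣ dx and the limit sin(x)/x are made-up examples, not from the notes), a minimal sketch:

```python
import math

# IBP with u = x, dv = e^x dx gives ∫_0^1 x·e^x dx = [x·e^x]_0^1 − ∫_0^1 e^x dx
# = e − (e − 1) = 1. Compare that against a direct numerical integral.

def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum approximation of ∫_a^b f(x) dx."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

numeric = riemann(lambda x: x * math.exp(x), 0.0, 1.0)
by_parts = 1.0 * math.e - (math.e - 1.0)  # uv evaluated 0→1, minus ∫ v du

# L'Hopital: sin(x)/x at x → 0 is the 0/0 form, so the limit equals
# cos(0)/1 = 1; evaluating near 0 should agree.
lhopital = math.sin(1e-8) / 1e-8
```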

Double Integrals

1. ∫_c^d ∫_a^b f(x, y) dx dy = ∫_a^b ∫_c^d f(x, y) dy dx
2. ∫_c^d ∫_a^b f(y) dx dy = (b − a) ∫_c^d f(y) dy
3. ∫_c^d ∫_a^b f(x) dx dy = (d − c) ∫_a^b f(x) dx
4. ∫_c^d ∫_a^b [g(x) + h(y)] dx dy = (d − c) ∫_a^b g(x) dx + (b − a) ∫_c^d h(y) dy
5. ∫_c^d ∫_a^b g(x)h(y) dx dy = (∫_a^b g(x) dx)(∫_c^d h(y) dy)

Chain Rule

1. Two intermediate and one independent variable
   • w = f(x, y)
   • x = x(t), y = y(t)
   → dw/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt)
2. Two intermediate and two independent variables
   • w = f(x, y)
   • x = x(r, s), y = y(r, s)
   → ∂w/∂r = (∂f/∂x)(∂x/∂r) + (∂f/∂y)(∂y/∂r)
   → ∂w/∂s = (∂f/∂x)(∂x/∂s) + (∂f/∂y)(∂y/∂s)
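The separability rule (property 5) can be checked numerically; a minimal sketch with a made-up integrand g(x) = x, h(y) = y² on [0, 1]×[0, 2]:

```python
# ∫_0^1 x dx = 1/2 and ∫_0^2 y² dy = 8/3, so by property 5 the double
# integral of x·y² over the rectangle should be (1/2)(8/3) = 4/3.

def riemann2d(f, a, b, c, d, n=400):
    """Midpoint Riemann sum for the double integral of f over [a,b]×[c,d]."""
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * hx
        for j in range(n):
            y = c + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

double = riemann2d(lambda x, y: x * y**2, 0.0, 1.0, 0.0, 2.0)
separated = 0.5 * (8.0 / 3.0)  # product of the two single integrals
```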

Taylor Polynomials

Definition: f(x) = Σ (n=0 to ∞) f⁽ⁿ⁾(a)/n! · (x − a)ⁿ

Partial Derivatives

If the mixed partials ∂²f/∂x∂y and ∂²f/∂y∂x are continuous, then ∂²f/∂x∂y = ∂²f/∂y∂x.
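The Taylor definition can be evaluated directly for a function whose derivatives are known; a minimal sketch using eˣ about a = 0 (every derivative of eˣ is eˣ, so f⁽ⁿ⁾(0) = 1):

```python
import math

# Partial sum of the Taylor series Σ f⁽ⁿ⁾(a)/n! · (x − a)ⁿ for f = e^x, a = 0.
def taylor_exp(x, degree):
    """Degree-n Taylor polynomial of e^x about a = 0."""
    return sum(x**n / math.factorial(n) for n in range(degree + 1))

approx = taylor_exp(1.0, 10)  # should be close to e = 2.71828...
```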

Productivity

Marginal productivity of labour: ∂f/∂x
Marginal productivity of capital: ∂f/∂y

Total Differential

Formula: dz = (∂f/∂x)(x, y) dx + (∂f/∂y)(x, y) dy
• dx = change in x
• dy = change in y
• dz = approximate change in the function
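A worked total-differential example (the function f(x, y) = x²y and the point (2, 3) are made up for illustration):

```python
# For f(x, y) = x²y: ∂f/∂x = 2xy and ∂f/∂y = x², so
# dz = 2xy·dx + x²·dy approximates the change in f.

def f(x, y):
    return x**2 * y

x0, y0 = 2.0, 3.0
dx, dy = 0.01, -0.02

dz = 2 * x0 * y0 * dx + x0**2 * dy        # approximate change: 0.12 − 0.08
actual = f(x0 + dx, y0 + dy) - f(x0, y0)  # exact change in the function
```

For small dx and dy the linear approximation dz tracks the exact change closely.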

Lagrange Multipliers

• Find all (x, y, λ) that satisfy:
  → ∇f(x, y) = λ∇g(x, y)
  → g(x, y) = 0
• Equivalently, solve (∂f/∂x)(x, y) = λ(∂g/∂x)(x, y) and (∂f/∂y)(x, y) = λ(∂g/∂y)(x, y) together with g(x, y) = 0.

Gradient

For a function of two variables, the gradient is ∇f = (∂f/∂x, ∂f/∂y).

Present Value

• General income stream R(t):
  → Finite term on [0, T]: ∫_0^T R(t)e^(−rt) dt
  → Forever on [0, ∞): ∫_0^∞ R(t)e^(−rt) dt
• m equal payments per year, amount P each:
  → Finite term on [0, T]: (mP/r)(1 − e^(−rT))
  → Forever on [0, ∞): mP/r
• r is the rate of interest
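The two present-value formulas agree for a constant stream, since m payments of P per year is the stream R(t) = mP. A minimal numeric check (the figures mP = 1000, r = 0.05, T = 10 are made up):

```python
import math

# Present value of a constant income stream: the numerical integral
# ∫_0^T mP·e^(−rt) dt should match the closed form (mP/r)(1 − e^(−rT)).

def present_value(R, r, T, n=200_000):
    """Midpoint Riemann sum for ∫_0^T R(t)·e^(−rt) dt."""
    h = T / n
    return sum(R((i + 0.5) * h) * math.exp(-r * (i + 0.5) * h)
               for i in range(n)) * h

mP, r, T = 1000.0, 0.05, 10.0
pv_numeric = present_value(lambda t: mP, r, T)
pv_closed = (mP / r) * (1 - math.exp(-r * T))
```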

Determining Relative Extrema

1. Find critical points (a, b) by solving both:
   • (∂f/∂x)(a, b) = 0
   • (∂f/∂y)(a, b) = 0
2. Second Derivative Test: D(x, y) = f_xx·f_yy − (f_xy)²
   • D(a, b) > 0 and f_xx(a, b) < 0: relative max at (a, b)
   • D(a, b) > 0 and f_xx(a, b) > 0: relative min at (a, b)
   • D(a, b) < 0: saddle point
   • D(a, b) = 0: inconclusive

Expected Value

The probability-weighted average of a random variable:
1. E(x) = ∫_−∞^∞ x·f(x) dx

Variance

1. Var(x) = E[x²] − (E[x])² (often easier)
2. Var(x) = E[(x − E[x])²]
3. Var(x) = σ²
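The second derivative test can be sketched numerically; a minimal example with the made-up function f(x, y) = x² − y², which has a saddle at (0, 0) since D = (2)(−2) − 0² = −4 < 0:

```python
# Central-difference estimates of the second partials, then the
# discriminant D = f_xx·f_yy − (f_xy)² at a candidate critical point.

def second_partials(f, x, y, h=1e-4):
    """Numeric f_xx, f_yy, f_xy at (x, y) via central differences."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx, fyy, fxy

fxx, fyy, fxy = second_partials(lambda x, y: x**2 - y**2, 0.0, 0.0)
D = fxx * fyy - fxy**2  # negative here, so (0, 0) is a saddle point
```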
EV Properties

1. E(c) = c
2. E(ax) = aE(x)
3. E(x + a) = E(x) + a
4. E(x + y) = E(x) + E(y)
5. If x and y are independent: E(xy) = E(x)E(y)
6. E(E(x)) = E(x)

Var Properties

1. Var(x ± a) = Var(x)
2. Var(ax) = a²Var(x)
3. E(x²) = Var(x) + (E(x))²
4. If x and y are independent: Var(x + y) = Var(x) + Var(y)
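A quick Monte Carlo sanity check of two of these properties (the sample size and the uniform distribution are arbitrary choices for illustration):

```python
import random
from statistics import fmean, pvariance

# Check Var(ax) = a²·Var(x) and E(x + y) = E(x) + E(y) on simulated data.
random.seed(0)
xs = [random.uniform(0, 1) for _ in range(100_000)]
ys = [random.uniform(0, 1) for _ in range(100_000)]

ex, ey = fmean(xs), fmean(ys)
var_x = pvariance(xs)
var_3x = pvariance([3 * x for x in xs])          # should be 9·Var(x)
e_sum = fmean([x + y for x, y in zip(xs, ys)])   # should be E(x) + E(y)
```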

PDF

1. f(x) ≥ 0 for all x
2. ∫_−∞^∞ f(x) dx = 1

Joint PDF

1. f(x, y) ≥ 0 for all x, y
2. ∬_−∞^∞ f(x, y) dx dy = 1

Cumulative Distribution Function (CDF)

Represents the probability that x is less than or equal to t, where f(x) is a PDF:
F(t) = P(x ≤ t) = ∫_−∞^t f(x) dx

CDF Properties:
1. 0 ≤ F(x) ≤ 1
2. F′(x) = f(x) ≥ 0
3. F(x) is non-decreasing
4. lim (x→−∞) F(x) = 0 and lim (x→∞) F(x) = 1
5. P(a < x ≤ b) = F(b) − F(a)
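The PDF/CDF relationship can be verified on a concrete distribution; a minimal sketch using the exponential distribution with a made-up rate λ = 2, whose CDF has the closed form F(t) = 1 − e^(−λt):

```python
import math

lam = 2.0

def pdf(x):
    """Exponential PDF: λ·e^(−λx) for x ≥ 0, 0 otherwise."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def cdf_numeric(t, n=100_000):
    """F(t) = ∫_0^t f(x) dx via a midpoint Riemann sum."""
    h = t / n
    return sum(pdf((i + 0.5) * h) for i in range(n)) * h

t = 1.5
F_closed = 1 - math.exp(-lam * t)   # closed-form CDF
F_numeric = cdf_numeric(t)          # integral of the PDF
```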

Covariance

1. Cov(x, y) = E[(x − E(x))(y − E(y))]
2. Symmetry:
   • Cov(x, y) = Cov(y, x)
   • ρ(x, y) = ρ(y, x)
3. Linearity:
   • Cov(ax₁ + bx₂ + c, y) = a·Cov(x₁, y) + b·Cov(x₂, y)
4. Cov(x, x) = Var(x)
5. Var(x + y) = Var(x) + Var(y) + 2Cov(x, y)

Correlation

The correlation is a measure of the degree to which large values of X tend to be associated with large values of Y:
ρ(x, y) = Cov(x, y) / √(Var(x)Var(y)), with −1 ≤ ρ(x, y) ≤ 1

Variance and Standard Deviation (σ)

1a. Var(cx) = c²Var(x)      1b. σ(cx) = |c|σ(x)
2a. Var(c) = 0              2b. σ(c) = 0
3a. Var(x + c) = Var(x)     3b. σ(x + c) = σ(x)
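A sample-based sketch of covariance and correlation (the linear relationship y = 2x + noise, sample size, and noise level are all made up): Cov(x, y) should come out positive, ρ should land in [−1, 1], and property 5 above should hold exactly.

```python
import random
from statistics import fmean, variance

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(50_000)]
ys = [2 * x + random.gauss(0, 0.5) for x in xs]

def cov(a, b):
    """Sample covariance E[(x − E(x))(y − E(y))], n − 1 denominator."""
    ma, mb = fmean(a), fmean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

c = cov(xs, ys)
rho = c / (variance(xs) * variance(ys)) ** 0.5

# Property 5: Var(x + y) = Var(x) + Var(y) + 2·Cov(x, y)
lhs = variance([x + y for x, y in zip(xs, ys)])
rhs = variance(xs) + variance(ys) + 2 * c
```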

Normal Distribution

The most important distribution. PDF of a normal distribution:
f(x) = (1 / (σ√(2π))) e^(−½((x − μ)/σ)²),  −∞ < x < ∞

Transformation

A normally distributed random variable x can be transformed into a standard normal by:
z = (x − μ)/σ
P(a ≤ x ≤ b) = P((a − μ)/σ ≤ z ≤ (b − μ)/σ)

Markov's Inequality

If x ≥ 0 and c > 0, then P(x ≥ c) ≤ E(x)/c

Chebyshev's Inequality

If x is a random variable with finite mean μ and variance σ², then for any number ε > 0:
P(|x − μ| ≥ ε) ≤ σ²/ε²
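The standardization can be sketched with the standard normal CDF Φ(z) = (1 + erf(z/√2))/2 (the parameters μ = 100, σ = 15 and the interval [85, 115] are made-up values; one σ either side of the mean covers about 68.27% of the distribution):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 100.0, 15.0
a, b = 85.0, 115.0

# P(a ≤ x ≤ b) = P((a − μ)/σ ≤ z ≤ (b − μ)/σ) = Φ(1) − Φ(−1)
prob = phi((b - mu) / sigma) - phi((a - mu) / sigma)
```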
