Transformations of Random Variables - Examples: Example 1: Square of Normal
Solution 1: Start with the CDF $F_Y(y) = \Pr[Y \le y]$ and insert $Y = g(X)$:

$$F_Y(y) = \Pr[Y \le y] = \Pr[X^2 \le y] = \Pr[|X| \le \sqrt{y}]$$
$$= \Pr[-\sqrt{y} \le X \le \sqrt{y}] = \Pr[X \le \sqrt{y}] - \Pr[X \le -\sqrt{y}] = F_X(\sqrt{y}) - F_X(-\sqrt{y}),$$

where we used $\Pr[a \le X \le b] = \Pr[X \le b] - \Pr[X \le a]$ and the fact that $\sqrt{X^2} = |X|$ (and not $X$). To find the PDF we simply form the derivative of the CDF with respect to its argument, i.e.,

$$f_Y(y) = \frac{dF_Y(y)}{dy} = \frac{1}{2\sqrt{y}}\, f_X(\sqrt{y}) - \frac{-1}{2\sqrt{y}}\, f_X(-\sqrt{y}) = \frac{1}{2\sqrt{y}} \left( f_X(\sqrt{y}) + f_X(-\sqrt{y}) \right),$$

where the chain rule was used to evaluate the derivative. Finally, insert the known PDF of a standard normal random variable, i.e., $f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$. We obtain

$$f_Y(y) = \frac{1}{2\sqrt{y}} \left( \frac{1}{\sqrt{2\pi}} e^{-y/2} + \frac{1}{\sqrt{2\pi}} e^{-y/2} \right) = \frac{1}{\sqrt{2\pi y}}\, e^{-y/2}.$$
What we should not forget is that while $X$ can be in $(-\infty, \infty)$, $Y = X^2$ obviously only takes values in $[0, \infty)$, so the full answer is

$$f_Y(y) = \begin{cases} \frac{1}{\sqrt{2\pi y}}\, e^{-y/2} & y \ge 0 \\ 0 & \text{otherwise.} \end{cases}$$
Solution 2: The same result follows from the general transformation recipe:

1. Find the range of $Y$ by inserting the range of $X$ into $g(X)$. Here: $X \in (-\infty, \infty)$, therefore $Y \in [0, \infty)$. Then we know that $f_Y(y) = 0$ for all $y < 0$.

2. Solve $y = g(x)$ for $x$ and find all solutions $x_i(y)$. Here: $x = \pm\sqrt{y}$, i.e., $x_1 = \sqrt{y}$, $x_2 = -\sqrt{y}$.

3. Determine $g'(x_i)$ where $g'(x) = dg(x)/dx$. Here: $g'(x) = 2x$, therefore $g'(x_1) = 2\sqrt{y}$ and $g'(x_2) = -2\sqrt{y}$.

4. Form the final solution $f_Y(y) = \sum_i \frac{1}{|g'(x_i)|}\, f_X(x_i)$. Here:

$$f_Y(y) = \frac{1}{|2\sqrt{y}|}\, f_X(\sqrt{y}) + \frac{1}{|-2\sqrt{y}|}\, f_X(-\sqrt{y}) = \frac{1}{2\sqrt{y}} \left( \frac{1}{\sqrt{2\pi}} e^{-y/2} + \frac{1}{\sqrt{2\pi}} e^{-y/2} \right) = \frac{1}{\sqrt{2\pi y}}\, e^{-y/2}.$$
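As a quick numerical sanity check (a sketch, not part of the derivation), we can sample $X \sim \mathcal{N}(0,1)$, square it, and compare the empirical distribution of $Y = X^2$ against the derived density $f_Y(y) = e^{-y/2}/\sqrt{2\pi y}$:

```python
import math
import random

random.seed(0)

# Sample Y = X^2 for X standard normal and compare against the derived
# density f_Y(y) = exp(-y/2) / sqrt(2*pi*y), valid for y >= 0.
n = 200_000
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]

def f_Y(y):
    return math.exp(-y / 2.0) / math.sqrt(2.0 * math.pi * y)

# Histogram heights over a few bins should be close to f_Y at the bin centers.
for lo, hi in [(0.5, 0.7), (1.0, 1.2), (2.0, 2.2)]:
    height = sum(lo <= y < hi for y in samples) / (n * (hi - lo))
    print(f"[{lo}, {hi}): empirical {height:.3f} vs f_Y {f_Y((lo + hi) / 2):.3f}")
```

Note also that $\Pr[Y \le 1] = \Pr[|X| \le 1] \approx 0.6827$, which the samples reproduce.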
Example 2: Max of exponentials
Task: Let $X, Y$ be independent and exponentially distributed random variables with parameters $\lambda_X, \lambda_Y$, i.e., $f_X(x) = \lambda_X \cdot e^{-\lambda_X x}$ for $x \ge 0$ and similarly $f_Y(y) = \lambda_Y \cdot e^{-\lambda_Y y}$ for $y \ge 0$.
Find the PDF of Z = max{X, Y }.
Solution: Since $Z = \max\{X, Y\} \le z$ holds exactly when both $X \le z$ and $Y \le z$, independence gives

$$F_Z(z) = \Pr[Z \le z] = \Pr[X \le z,\, Y \le z] = F_X(z) \cdot F_Y(z).$$

As before, to find the PDF we differentiate. Using the product rule of differentiation we obtain

$$f_Z(z) = \frac{dF_Z(z)}{dz} = f_X(z) \cdot F_Y(z) + F_X(z) \cdot f_Y(z).$$

Since we know that $X$ and $Y$ are exponential we can insert their PDFs and CDFs. The PDF was given above; the CDF evaluates to $F_X(x) = \int_0^x f_X(\tau)\, d\tau = 1 - e^{-\lambda_X x}$ for $x \ge 0$. Consequently, we obtain for $Z$:

$$f_Z(z) = \lambda_X e^{-\lambda_X z} \left( 1 - e^{-\lambda_Y z} \right) + \left( 1 - e^{-\lambda_X z} \right) \lambda_Y e^{-\lambda_Y z}$$
$$= \lambda_X e^{-\lambda_X z} + \lambda_Y e^{-\lambda_Y z} - (\lambda_X + \lambda_Y)\, e^{-(\lambda_X + \lambda_Y) z}.$$
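The closed form is easy to check by simulation (a sketch; the rates $\lambda_X = 1$, $\lambda_Y = 2$ are arbitrary example values, not from the text):

```python
import math
import random

random.seed(1)

# Compare the empirical CDF of Z = max{X, Y} for independent exponentials
# against the product form F_Z(z) = (1 - e^(-lam_x z)) * (1 - e^(-lam_y z)).
lam_x, lam_y = 1.0, 2.0   # example rates chosen for illustration
n = 200_000
z_samples = [max(random.expovariate(lam_x), random.expovariate(lam_y))
             for _ in range(n)]

def F_Z(z):
    return (1.0 - math.exp(-lam_x * z)) * (1.0 - math.exp(-lam_y * z))

for z in (0.5, 1.0, 2.0):
    empirical = sum(s <= z for s in z_samples) / n
    print(f"z = {z}: empirical CDF {empirical:.3f} vs F_Z(z) {F_Z(z):.3f}")
```

Differentiating the product CDF numerically would likewise reproduce the derived $f_Z(z)$.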
What if they are not independent? In this case we need to integrate the joint PDF over the region that satisfies $(X \le z)$ AND $(Y \le z)$. This region can either be constructed by intersecting the half-planes $(X \le z)$ and $(Y \le z)$, or by factoring the joint event into $\big((X \le z)$ AND $(X > Y)\big)$ OR $\big((Y \le z)$ AND $(X \le Y)\big)$, where the two "OR"-ed events are disjoint and hence their probabilities add. Therefore,

$$F_Z(z) = \int_{-\infty}^{z} \int_{-\infty}^{z} f_{X,Y}(x, y)\, dx\, dy.$$
For the special case of independent variables, $f_{X,Y}(x, y)$ factors into $f_X(x) \cdot f_Y(y)$ and hence $F_Z(z)$ becomes $F_X(z) \cdot F_Y(z)$, as mentioned earlier.
Example 3: Auxiliary random variable
Task: Consider $X, Y$ to be two mutually independent uniformly distributed random variables on the interval $[0, 1]$, i.e., $f_{X,Y}(x, y) = 1$ for $0 \le x, y \le 1$ and $0$ otherwise. Show that $Z = \sqrt{-2\ln(X)} \cdot \cos(2\pi Y)$ is standard normal distributed.
Solution:
This one is easiest solved by introducing an auxiliary random variable. Since $Z$ involves the cosine of $Y$, it may be a good idea to define one that involves the sine of $Y$, so that the determinant of the Jacobian becomes simple. A reasonable choice is $W = \sqrt{-2\ln(X)} \cdot \sin(2\pi Y)$.
Now we go through the usual steps to find fZ,W (z, w):
1. Determine the range for Z and W . It is not difficult to see that for X, Y ∈ [0, 1] we obtain all values
Z, W ∈ (−∞, +∞).
2. Solve for $Z$ and $W$. It is easy to see that $Z^2 + W^2 = -2\ln(X)$ and $W/Z = \tan(2\pi Y)$, eventually leading to the solutions

$$X = e^{-\frac{Z^2 + W^2}{2}} \quad \text{and} \quad Y = \frac{1}{2\pi} \tan^{-1}\!\left( \frac{W}{Z} \right).$$
3. Compute the Jacobian matrix for $g(x, y) = \sqrt{-2\ln(x)} \cdot \cos(2\pi y)$ and $h(x, y) = \sqrt{-2\ln(x)} \cdot \sin(2\pi y)$. We obtain

$$J = \begin{bmatrix} \frac{\partial g(x,y)}{\partial x} & \frac{\partial g(x,y)}{\partial y} \\[4pt] \frac{\partial h(x,y)}{\partial x} & \frac{\partial h(x,y)}{\partial y} \end{bmatrix} = \begin{bmatrix} \frac{-2}{2x\sqrt{-2\ln(x)}} \cos(2\pi y) & -2\pi \sqrt{-2\ln(x)}\, \sin(2\pi y) \\[4pt] \frac{-2}{2x\sqrt{-2\ln(x)}} \sin(2\pi y) & 2\pi \sqrt{-2\ln(x)}\, \cos(2\pi y) \end{bmatrix},$$

so that $\det(J) = -\frac{2\pi}{x} \left( \cos^2(2\pi y) + \sin^2(2\pi y) \right) = -\frac{2\pi}{x}$. Inserting $x = e^{-\frac{z^2 + w^2}{2}}$ yields

$$|\det(J)| = \frac{2\pi}{e^{-\frac{z^2 + w^2}{2}}}.$$
4. Now we can write down the final result for the joint PDF of $Z$ and $W$:

$$f_{Z,W}(z, w) = \sum_i \frac{1}{|\det(J(x_i, y_i))|}\, f_{X,Y}(x_i, y_i) = \frac{1}{2\pi}\, e^{-\frac{z^2 + w^2}{2}}.$$
From $f_{Z,W}(z, w)$ we observe that $Z$ and $W$ are jointly Gaussian and mutually independent. Consequently, if we marginalize over $W$ we find $f_Z(z)$ to be Gaussian as well. Mathematically,

$$f_Z(z) = \int_{-\infty}^{\infty} f_{Z,W}(z, w)\, dw = \frac{1}{2\pi}\, e^{-\frac{z^2}{2}} \underbrace{\int_{-\infty}^{\infty} e^{-\frac{w^2}{2}}\, dw}_{\sqrt{2\pi}} = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}},$$

which is the desired result.
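The result can also be confirmed numerically (a sketch): feeding independent uniforms through $Z = \sqrt{-2\ln(X)} \cdot \cos(2\pi Y)$ should produce samples with the mean, variance, and one-sigma coverage of a standard normal.

```python
import math
import random

random.seed(2)

# Generate Z = sqrt(-2 ln X) * cos(2 pi Y) from independent uniforms and
# check the first two moments and one-sigma coverage of a standard normal.
n = 200_000
z_samples = []
for _ in range(n):
    x = 1.0 - random.random()   # in (0, 1], avoids log(0)
    y = random.random()
    z_samples.append(math.sqrt(-2.0 * math.log(x)) * math.cos(2.0 * math.pi * y))

mean = sum(z_samples) / n
var = sum(z * z for z in z_samples) / n - mean ** 2
coverage = sum(-1.0 <= z <= 1.0 for z in z_samples) / n  # should be ~0.6827
print(f"mean {mean:.3f}, variance {var:.3f}, P(|Z| <= 1) {coverage:.3f}")
```

This is exactly the Box-Muller construction, with $W$ playing the role of the second generated normal.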