
Properties of Expectation

Chien-Tong Lin

Department of Statistics
Feng Chia University

1/9
Outline

1. Expectation
2. Variance, covariance and correlations
3. Conditional expectations
4. Moment generating functions
5. Additional properties of Normal random variables.

2/9
Expectation
By definition, E[g(X)] = ∫ g(x) f_X(x) dx.

Properties

1. For a continuous random variable X and a function g,


E[g(X)] = ∫_0^∞ P(g(X) > t) dt - ∫_{-∞}^0 P(g(X) ≤ t) dt.

2. For two continuous random variables X and Y with joint p.d.f. f(x, y) and a
function g(x, y), we have

E[g(X, Y)] = ∫_{S_Y} ∫_{S_X} g(x, y) f(x, y) dx dy.

(Ch7:Prop2.1)

Remarks:
1. Let g(X, Y) = X + Y. Then in the continuous case,

E[X + Y] = ∫_{S_Y} ∫_{S_X} (x + y) f(x, y) dx dy
         = ∫_{S_X} x ∫_{S_Y} f(x, y) dy dx + ∫_{S_Y} y ∫_{S_X} f(x, y) dx dy
         = E[X] + E[Y].

2. The discrete case can be derived similarly.
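Property 1 can be sanity-checked numerically. A minimal sketch, assuming X ~ Exponential(1) with g(x) = x, so that E[X] = 1 and the second integral vanishes because X ≥ 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ Exponential(1): E[X] = 1, and P(X <= t) = 0 for t < 0, so
# Property 1 reduces to E[X] = integral_0^inf P(X > t) dt.
x = rng.exponential(scale=1.0, size=100_000)

# Empirical tail probability P(X > t) on a grid; the tail is negligible past t = 20.
t = np.linspace(0.0, 20.0, 201)
tail = np.array([np.mean(x > ti) for ti in t])

# Trapezoidal rule for the tail integral.
tail_integral = float(np.sum((tail[1:] + tail[:-1]) / 2 * np.diff(t)))

print(tail_integral)  # close to E[X] = 1
print(x.mean())       # close to 1 as well
```

Both numbers agree with E[X] = 1 up to Monte Carlo and discretization error.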

3/9
Sketch of proof of Property 1 for a positive random variable X (g(x) = x):

E[X] = ∫_0^∞ P(X > t) dt - ∫_{-∞}^0 P(X ≤ t) dt = ∫_0^∞ P(X > t) dt.

Note that, switching the order of integration,

∫_0^∞ P(X > t) dt = ∫_0^∞ ∫_t^∞ f_X(x) dx dt
                  = ∫_0^∞ (∫_0^x dt) f_X(x) dx
                  = ∫_0^∞ x f_X(x) dx = E[X].

By the same idea, for a general X,

∫_0^∞ P(X > t) dt - ∫_{-∞}^0 P(X ≤ t) dt = ∫_0^∞ x f_X(x) dx + ∫_{-∞}^0 x f_X(x) dx
                                          = ∫ x f_X(x) dx = E[X].
(Handwritten worked example: a list of numbers is split into successively smaller brackets over three rounds of quicksort partitioning.)
Exercise:
1. Ch7:Ex2m (Quick sort): on average, how many comparisons does it take?

2. Let X be a geometrically distributed random variable. Find E[Y ] where


Y = min{X, M } with M > 0.
Solution sketch for Exercise 1 (quicksort). Let X be the number of comparisons
quicksort makes, and let I(i, j) = 1 if items i and j are directly compared,
0 otherwise. Then

X = Σ_{i<j} I(i, j).
4/9
E[X] = Σ_{i<j} E[I(i, j)]
     = Σ_{i<j} [1 × P(i, j are directly compared) + 0 × P(otherwise)]
     = Σ_{i<j} P(i, j are directly compared).

Remark: i and j can be compared only when
(A) i and j are in the same bracket, and
(B) either i or j is selected randomly as the pivot.

Hence

P(i, j are compared) = P(i or j is selected | i, j are in the same bracket)
                       + 0 × P(i, j are not in the same bracket)
                     = 2/(j - i + 1),

since i and j stay in the same bracket until one of the j - i + 1 values
i, i+1, . . . , j is chosen as a pivot, and each is equally likely to be chosen
first. Therefore, when n is large,

E[X] = Σ_{i<j} 2/(j - i + 1)
     ≈ ∫_1^n ∫_x^n 2/(y - x + 1) dy dx
     = ∫_1^n 2 log(n - x + 1) dx       (z = y - x + 1 in the inner integral)
     = ∫_1^n 2 log z dz                (z = n - x + 1, dz = -dx)
     = 2(z log z - z) |_1^n
     = 2n log n - 2n + 2
     ≈ 2n log n,

since 2n - 2 ≪ n log n when n is large. Hence quicksort has O(n log n)
complexity on average.
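The conclusion E[X] = Σ_{i<j} 2/(j - i + 1) ≈ 2n log n can be checked against a direct simulation. A sketch, assuming a random-pivot quicksort on n distinct items (the function name and the values n = 2000, 20 repetitions are illustrative):

```python
import random

def quicksort_comparisons(arr):
    """Count element comparisons made by quicksort with a uniformly random pivot."""
    if len(arr) <= 1:
        return 0
    pivot = random.choice(arr)
    less = [v for v in arr if v < pivot]
    greater = [v for v in arr if v > pivot]
    # Partitioning compares the pivot against every other element exactly once.
    return (len(arr) - 1) + quicksort_comparisons(less) + quicksort_comparisons(greater)

random.seed(1)
n = 2000

# Exact expected count from the derivation: sum over pairs of 2/(j - i + 1).
exact = sum(2.0 / (j - i + 1) for i in range(1, n) for j in range(i + 1, n + 1))

# Monte Carlo estimate of E[X] over 20 independent runs.
simulated = sum(quicksort_comparisons(list(range(n))) for _ in range(20)) / 20

print(exact)
print(simulated)  # should be close to `exact`
```

The two numbers should agree to within a few percent, both near 2n log n.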
Solution sketch for Exercise 2. Let X ~ Geo(p) and Y = min{X, M}, where M is a
positive integer. Since Y takes values in the positive integers,

E[Y] = Σ_{y=1}^∞ P(Y ≥ y)
     = Σ_{y=1}^∞ P(min{X, M} ≥ y)
     = Σ_{y=1}^M P(X ≥ y)          (for y > M, P(min{X, M} ≥ y) = 0)
     = Σ_{y=1}^M (1 - p)^{y-1}
     = (1 - (1 - p)^M) / p.
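The closed form E[min{X, M}] = (1 - (1 - p)^M)/p for X ~ Geo(p) and a positive integer M can be verified by simulation; a sketch with illustrative values p = 0.3, M = 5:

```python
import numpy as np

rng = np.random.default_rng(42)
p, M = 0.3, 5

# X ~ Geometric(p) on {1, 2, ...}; Y = min{X, M}.
x = rng.geometric(p, size=500_000)
y = np.minimum(x, M)

# E[Y] = sum_{y=1}^{M} P(X >= y) = (1 - (1 - p)^M) / p
closed_form = (1 - (1 - p) ** M) / p
print(closed_form)
print(y.mean())  # Monte Carlo estimate, close to the closed form
```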
Variance, covariance and correlations

Definition
▶ The covariance between X and Y, denoted by Cov(X, Y), is defined by
Cov(X, Y) = E[(X - E[X])(Y - E[Y])] = E[XY] - E[X]E[Y].

▶ The correlation of two random variables X and Y, denoted by ρ(X, Y), is
defined as

ρ(X, Y) = Cov(X, Y) / √(Var(X) Var(Y)).

Properties
1. Cov(X, Y) = Cov(Y, X)
2. Cov(X, X) = Var(X)
3. Cov(aX, Y) = a Cov(X, Y)
4. Cov(Σ_i X_i, Σ_j Y_j) = Σ_i Σ_j Cov(X_i, Y_j).
5. Var(Σ_i X_i) = Σ_i Var(X_i) + 2 Σ_{i<j} Cov(X_i, X_j).
6. -1 ≤ ρ(X, Y) ≤ 1.

5/9
Proof of Property 6. Write σ_X = √Var(X), σ_Y = √Var(Y), so that
Var(X/σ_X) = Var(Y/σ_Y) = 1. Then

0 ≤ Var(X/σ_X + Y/σ_Y) = Var(X)/σ_X² + Var(Y)/σ_Y² + 2 Cov(X, Y)/(σ_X σ_Y)
                       = 2 + 2ρ(X, Y),

so ρ(X, Y) ≥ -1. Similarly,

0 ≤ Var(X/σ_X - Y/σ_Y) = 2 - 2ρ(X, Y),

so ρ(X, Y) ≤ 1.
Exercise
1. Ch7:Ex4a (sample variance)
2. Ch7:Ex4e
Does Cov(X_i - X̄, X̄) = 0 imply (X_i - X̄) ⊥ X̄?
2.1 Suppose that X_1, X_2 are i.i.d. Uni(0, 1). Let Y = X_1 - X̄. Does Y ⊥ X̄?
2.2 Suppose that X_1, X_2, . . . , X_n are i.i.d. N(µ, σ²). Let Y_i = X_i - X̄. Does
Y_i ⊥ X̄ for every i = 1, . . . , n? (We will show this later.)
3. Ch7:Ex4f

6/9
Conditional Expectation

Definition
1. For continuous random variables X and Y , the conditional expectation of X
given Y = y is given by
E[X|Y = y] = ∫_{x ∈ S_X} x f_{X|Y}(x|y) dx.

Properties
1. E[g(X)|Y = y] = ∫_{S_X} g(x) f_{X|Y}(x|y) dx.
2. E[X] = E[E[X|Y]].
3. Var(X) = E[Var(X|Y)] + Var(E[X|Y]).
4. The function g(X) that minimizes E[(Y - g(X))²] is g(X) = E[Y|X].

Remarks:
1. When both X and Y are discrete random variables, the above definition and
properties can be obtained similarly.
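Properties 2 and 3 (the tower property and the variance decomposition) can be illustrated by simulation. A sketch, assuming the hypothetical model Y ~ Uni(0, 1) and X | Y = y ~ N(y, 1), for which E[X] = E[Y] = 1/2 and Var(X) = 1 + 1/12:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

# Y ~ Uniform(0, 1); given Y = y, X ~ Normal(y, 1).
y = rng.uniform(size=n)
x = rng.normal(loc=y, scale=1.0)

# Property 2: E[X] = E[E[X|Y]] = E[Y] = 1/2.
print(x.mean())

# Property 3: Var(X) = E[Var(X|Y)] + Var(E[X|Y]) = 1 + Var(Y) = 1 + 1/12.
print(x.var())
```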

7/9
Exercise
1. Ch7:Ex5b, Ch7:Ex5q
2. Ch7:Ex5f, Ch7:Ex5j
3. Given g(X) = a + bX, find the best linear predictor of Y in the sense of mean
square error E[(Y - g(X))²]. That is, find a and b that minimize
E[(Y - (a + bX))²]. Further, evaluate the mean square error of the predictor.
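For Exercise 3, the standard answer is b = Cov(X, Y)/Var(X), a = E[Y] - b E[X], with mean square error Var(Y)(1 - ρ²). A numerical sketch on simulated data (the model Y = 2 + 3X + ε with noise sd 0.5 is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 400_000

x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=n)

# Best linear predictor: b = Cov(X, Y)/Var(X), a = E[Y] - b E[X].
b = np.cov(x, y)[0, 1] / x.var()
a = y.mean() - b * x.mean()

# Achieved mean square error vs. the formula Var(Y)(1 - rho^2).
mse = np.mean((y - (a + b * x)) ** 2)
rho = np.corrcoef(x, y)[0, 1]

print(a, b)                          # close to the true 2 and 3
print(mse, y.var() * (1 - rho**2))   # both close to the noise variance 0.25
```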

We say that the random variables X, Y have a bivariate normal distribution if, for
constants µ_x, µ_y, σ_x > 0, σ_y > 0, -1 < ρ < 1, their joint p.d.f. is given by

f(x, y) = 1/(2π σ_x σ_y √(1 - ρ²)) exp{ -[Z_x² + Z_y² - 2ρ Z_x Z_y] / (2(1 - ρ²)) },

where Z_x = (x - µ_x)/σ_x, Z_y = (y - µ_y)/σ_y, and ρ is the correlation between X and Y.

1. Suppose that ρ = 0; does this imply that X ⊥ Y? Is this true for general
distributions?
2. Suppose that X ⊥ Y; does this imply that ρ = 0? Is this true for general
distributions?
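For question 1: ρ = 0 does imply X ⊥ Y in the bivariate normal case, but not for general distributions. A standard counterexample, simulated below (the choice X ~ N(0, 1), Y = X² is illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=400_000)
y = x ** 2

# Cov(X, X^2) = E[X^3] = 0 for X ~ N(0, 1), so rho(X, Y) = 0 ...
cov_xy = float(np.cov(x, y)[0, 1])
print(cov_xy)  # close to 0: X and Y are uncorrelated

# ... yet Y = X^2 is completely determined by X, hence dependent:
# the mean of Y given |X| > 2 is far above the overall mean E[Y] = 1.
print(y[np.abs(x) > 2].mean(), y.mean())
```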

8/9
Moment Generating Functions

9/9
