Module 1 - Lesson 1 (Part 2) : Weighted Least Squares
Method of Weighted Least Squares
Consider the general linear measurement model for m measurements and n
unknowns:
$$\begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix} = H \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} v_1 \\ \vdots \\ v_m \end{bmatrix}, \qquad y = Hx + v$$
In regular least squares, we implicitly assumed that each noise term was of equal
variance:
$$\mathbb{E}[v_i^2] = \sigma^2 \quad (i = 1, \ldots, m), \qquad R = \mathbb{E}[vv^T] = \begin{bmatrix} \sigma^2 & & 0 \\ & \ddots & \\ 0 & & \sigma^2 \end{bmatrix} = \sigma^2 I$$
Method of Weighted Least Squares
If we assume each noise term is independent, but of different variance,
$$\mathbb{E}[v_i^2] = \sigma_i^2 \quad (i = 1, \ldots, m), \qquad R = \mathbb{E}[vv^T] = \begin{bmatrix} \sigma_1^2 & & 0 \\ & \ddots & \\ 0 & & \sigma_m^2 \end{bmatrix}$$
Then we can define a weighted least squares criterion as:
$$\mathcal{L}_{\mathrm{WLS}}(x) = e^T R^{-1} e = \frac{e_1^2}{\sigma_1^2} + \frac{e_2^2}{\sigma_2^2} + \ldots + \frac{e_m^2}{\sigma_m^2}, \qquad \text{where} \quad \begin{bmatrix} e_1 \\ \vdots \\ e_m \end{bmatrix} = e = \begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix} - H \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$$
The higher the expected noise, the lower the weight we place on the
measurement.
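As a concrete illustration (a minimal sketch, not part of the lesson; the matrices `H`, the measurements `y`, and the standard deviations are made-up numbers), the weighted criterion can be evaluated directly:

```python
import numpy as np

# Hypothetical example: m = 3 measurements of a single unknown (n = 1).
H = np.array([[1.0], [1.0], [1.0]])
y = np.array([1.0, 1.1, 2.0])
sigmas = np.array([0.1, 0.1, 1.0])   # third sensor is much noisier
R = np.diag(sigmas**2)

def wls_loss(x):
    """Weighted least squares criterion L_WLS(x) = e^T R^{-1} e."""
    e = y - H @ x
    return e @ np.linalg.inv(R) @ e

# The two precise measurements each contribute with weight 1/0.1^2 = 100,
# while the noisy one contributes with weight 1/1.0^2 = 1.
print(wls_loss(np.array([1.05])))
```

Note how the residual against the noisy measurement (0.95 at x = 1.05) is almost as large as the loss itself, yet its weight of 1 keeps it from dominating the two precise sensors.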
Re-deriving regular least squares
What happens if all of our variances are the same?
$$\mathcal{L}_{\mathrm{WLS}}(x) = \frac{e_1^2}{\sigma^2} + \frac{e_2^2}{\sigma^2} + \ldots + \frac{e_m^2}{\sigma^2} = \frac{1}{\sigma^2}\left(e_1^2 + \ldots + e_m^2\right)$$
Since our variance is constant, it will not affect our final estimate! This results in
the same estimate as regular least squares:
$$\sigma_1^2 = \sigma_2^2 = \ldots = \sigma_m^2 = \sigma^2 \;\Rightarrow\; \hat{x}_{\mathrm{WLS}} = \hat{x}_{\mathrm{LS}} = \operatorname*{argmin}_x \mathcal{L}_{\mathrm{LS}}(x) = \operatorname*{argmin}_x \mathcal{L}_{\mathrm{WLS}}(x)$$
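This can be checked numerically (a sketch with made-up numbers; `H`, `y`, and the variance are arbitrary): when R = σ²I, the constant 1/σ² cancels from both sides of the normal equations, so the weighted solution matches the ordinary least squares solution for any σ.

```python
import numpy as np

# Hypothetical overdetermined problem: m = 4 measurements, n = 2 unknowns.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
y = np.array([1.0, 2.0, 3.1, -0.9])

# Ordinary least squares: solve H^T H x = H^T y.
x_ls = np.linalg.solve(H.T @ H, H.T @ y)

# Weighted least squares with equal variances, R = sigma^2 I.
sigma2 = 4.0
R_inv = np.eye(4) / sigma2
x_wls = np.linalg.solve(H.T @ R_inv @ H, H.T @ R_inv @ y)

# The common factor 1/sigma^2 cancels, so the two estimates agree.
print(np.allclose(x_ls, x_wls))
```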
Minimizing the Weighted Least Squares Criterion
Expanding our new criterion,
ℒWLS(x) = eT R−1e
T −1
= (y − Hx) R (y − Hx)
We can minimize it as before, but accounting for the new weighting term:
$$\hat{x} = \operatorname*{argmin}_x \mathcal{L}(x): \qquad \left.\frac{\partial \mathcal{L}}{\partial x}\right|_{x = \hat{x}} = 0 = -y^T R^{-1} H + \hat{x}^T H^T R^{-1} H$$

$$\Rightarrow\; H^T R^{-1} H \, \hat{x}_{\mathrm{WLS}} = H^T R^{-1} y$$
The weighted normal equations!
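As a sketch of putting the weighted normal equations to work (the matrices `H`, `y`, and the standard deviations are made-up numbers, not from the lesson), the system can be solved directly with NumPy:

```python
import numpy as np

# Hypothetical problem: m = 3 measurements of a single unknown (n = 1).
H = np.array([[1.0], [1.0], [1.0]])
y = np.array([1.0, 1.1, 2.0])
sigmas = np.array([0.1, 0.1, 1.0])   # third sensor is far noisier

R_inv = np.diag(1.0 / sigmas**2)

# Weighted normal equations: (H^T R^{-1} H) x_hat = H^T R^{-1} y
x_wls = np.linalg.solve(H.T @ R_inv @ H, H.T @ R_inv @ y)

# Unweighted solution for comparison: the plain average of the measurements.
x_ls = np.linalg.solve(H.T @ H, H.T @ y)

# The noisy measurement (2.0) drags the unweighted average upward, while
# the weighted estimate stays near the two precise measurements.
print(x_ls[0], x_wls[0])
```

Here the unweighted estimate is the simple mean (about 1.37), while the weighted estimate (about 1.05) sits close to the two low-variance measurements, exactly as the criterion's weighting suggests.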
Method of Weighted Least Squares
The weighted normal equations: $H^T R^{-1} H \, \hat{x}_{\mathrm{WLS}} = H^T R^{-1} y$

|                  | Least Squares                          | Weighted Least Squares                          |
|------------------|----------------------------------------|-------------------------------------------------|
| Loss / Criterion | $\mathcal{L}_{\mathrm{LS}}(x) = e^T e$ | $\mathcal{L}_{\mathrm{WLS}}(x) = e^T R^{-1} e$  |
| Measurements     | $m \ge n$                              | $m \ge n$                                       |
| Limitations      |                                        | $\sigma_i^2 > 0$                                |