
MEKELLE UNIVERSITY

ETHIOPIAN INSTITUTE OF TECHNOLOGY


SCHOOL OF COMPUTER AND ELECTRICAL ENGINEERING
CHAIR OF COMMUNICATION ENGINEERING

Recursive Least Squares (RLS)

By
1. Gidey Leul

Recursive Least Square @EITM, 2/4/2020
CONTENT

• 1. Introduction to Adaptive Filters
• 2. What is Recursive Least Squares (RLS)?
• 3. Comparison of RLS, LMS, and the Steepest Descent Method
• 4. Matlab Simulation Results
• 5. Applications of RLS


INTRODUCTION TO ADAPTIVE FILTER
• An adaptive filter has an adaptation algorithm that monitors the environment and varies the filter transfer function accordingly.
• Its filter coefficients are adjusted during operation, unlike fixed designs such as Chebyshev, low-pass, and high-pass filters.


ADAPTIVE FILTERS

Adaptive filters fall into two structural families:
• IIR (recursive) adaptive filters
• FIR adaptive filters, for which stability is easy to guarantee; their coefficients are adapted with the Steepest Descent method, Least Mean Squares (LMS), or Recursive Least Squares (RLS) (a minimal LMS sketch follows below)

• An adaptive filter is a computational device that iteratively models the relationship between the input and output of a signal.
• The minimum mean square error (MSE) is the common measure of estimator quality.
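Since LMS appears above as one of the FIR adaptation methods and is the main point of comparison for RLS later, here is a minimal LMS sketch in Python/NumPy. It is only an illustrative baseline: the signal names x and d, the order p, and the step size mu are assumptions, not values from the slides.

```python
import numpy as np

def lms_filter(x, d, p=3, mu=0.05):
    """Minimal LMS adaptive FIR filter with p+1 taps, updated once per sample."""
    w = np.zeros(p + 1)                    # filter coefficients, adapted over time
    e = np.zeros(len(x))                   # error signal e(n) = d(n) - y(n)
    for n in range(p, len(x)):
        x_vec = x[n - p:n + 1][::-1]       # [x(n), x(n-1), ..., x(n-p)]
        y = w @ x_vec                      # filter output
        e[n] = d[n] - y
        w = w + mu * e[n] * x_vec          # LMS coefficient update
    return w, e

# Toy usage: identify an unknown 4-tap system from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h_true = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_est, err = lms_filter(x, d)
print("estimated taps:", np.round(w_est, 3))
```

With these settings the estimate approaches h_true after a few hundred samples; the RLS recursion sketched later typically reaches a comparable estimate in far fewer samples.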
RECURSIVE LEAST SQUARE
• The objective of the RLS algorithm is to maintain a solution that is optimal at each iteration; differentiation of the error leads to the normal equations.
• RLS recursively finds the coefficients that minimize a weighted linear least squares error relating to the input signals.
• Each new data point is taken into account to modify the previous estimate of the parameters of a linear model of the observed correlation.
• The different forms of the RLS algorithm are:
  - Growing window RLS algorithm
  - Exponentially weighted RLS
  - Sliding window RLS
• RLS is an effective tracker of nonstationary processes.
RECURSIVE LEAST SQUARE
• Objective: maintain a solution that is optimal at each iteration. Differentiation of the error leads to the normal equations.

• Consider first minimizing the mean square error

  $\xi(n) = E\{|e(n)|^2\}$

  with the gradient (steepest) descent algorithm:
  - convergence is iterative
  - it requires knowledge of the auto- and cross-correlations

• Recursive Least Squares instead uses an error measure formed directly from the data, without the expectation:

  $\varepsilon(n) = \sum_{i=0}^{n} |e(i)|^2$


CONTND….

Mean Square Error
• Produces the same set of filter coefficients for every realization of the process.
• $w_n$ depends on ensemble averages.

Least Square Error
• Minimizes the squared error for the particular realization of $x(n)$, independent of ensemble statistics.
• Different signals lead to different filters.
• Different realizations of $x(n)$ and $d(n)$ will lead to different solutions.

MINIMIZING WEIGHTED LEAST SQUARE ERROR


CONTND…. RLS
EXPONENTIAL WEIGHTED RLS
• Find the coefficient vector $\mathbf{w}_n = [w_n(0), w_n(1), \ldots, w_n(p)]^T$ that minimizes the weighted least squares error at time $n$:

  $\mathcal{E}(n) = \sum_{i=0}^{n} \lambda^{n-i}\, |e(i)|^2, \qquad 0 < \lambda \le 1$

  where $\lambda$ is the exponential weighting (forgetting) factor.

• In minimizing $\mathcal{E}(n)$, $\mathbf{w}_n$ is held constant over $0 \le i \le n$, and

  $e(i) = d(i) - y(i) = d(i) - \mathbf{w}_n^T\,\mathbf{x}(i)$
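To make the role of the forgetting factor concrete, the short snippet below just evaluates the weights $\lambda^{n-i}$; the value of lam and the time index n are illustrative assumptions.

```python
import numpy as np

lam, n = 0.95, 50                      # illustrative forgetting factor and time index
i = np.arange(n + 1)
weights = lam ** (n - i)               # lambda^(n-i), the weight applied to |e(i)|^2
print(np.round(weights[:3], 4))        # oldest errors: heavily discounted
print(np.round(weights[-3:], 4))       # newest errors: weight close to 1
print("effective memory ~ 1/(1 - lambda) =", 1 / (1 - lam), "samples")
```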
CONTND…. RLS

• To find the coefficients that minimize $\mathcal{E}(n)$, set the derivative with respect to each $w_n^*(k)$ to zero:

  $\dfrac{\partial \mathcal{E}(n)}{\partial w_n^*(k)} = \sum_{i=0}^{n} \lambda^{n-i}\, e(i)\,\dfrac{\partial e^*(i)}{\partial w_n^*(k)} = 0 \;\Longrightarrow\; \sum_{i=0}^{n} \lambda^{n-i}\, e(i)\, x^*(i-k) = 0, \qquad k = 0, 1, \ldots, p$

• Rearranging terms we would have

  $\sum_{l=0}^{p} w_n(l)\left[\sum_{i=0}^{n} \lambda^{n-i}\, x(i-l)\, x^*(i-k)\right] = \sum_{i=0}^{n} \lambda^{n-i}\, d(i)\, x^*(i-k), \qquad k = 0, 1, \ldots, p$


CONTND…. RLS

• In vector form, with $\mathbf{x}(i) = [x(i), x(i-1), \ldots, x(i-p)]^T$, define the exponentially weighted deterministic autocorrelation matrix

  $\mathbf{R}_x(n) = \sum_{i=0}^{n} \lambda^{n-i}\, \mathbf{x}^*(i)\, \mathbf{x}^T(i)$

  and the deterministic cross-correlation between $d(i)$ and $\mathbf{x}^*(i)$,

  $\mathbf{r}_{dx}(n) = \sum_{i=0}^{n} \lambda^{n-i}\, d(i)\, \mathbf{x}^*(i)$

• The normal equations become

  $\mathbf{R}_x(n)\, \mathbf{w}_n = \mathbf{r}_{dx}(n)$

• Let us evaluate the minimum squared error. Since

  $\mathcal{E}(n) = \sum_{i=0}^{n} \lambda^{n-i}\, e(i)\, e^*(i) = \sum_{i=0}^{n} \lambda^{n-i}\, e(i)\, \Big\{ d(i) - \sum_{l=0}^{p} w_n(l)\, x(i-l) \Big\}^{*}$

  the $w_n(l)$ that minimize the squared error give

  $\mathcal{E}_{\min}(n) = \sum_{i=0}^{n} \lambda^{n-i}\, e(i)\, d^*(i) = \sum_{i=0}^{n} \lambda^{n-i}\, \Big( d(i) - \sum_{l=0}^{p} w_n(l)\, x(i-l) \Big)\, d^*(i)$
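The normal equations above can be evaluated directly in a batch fashion. The sketch below, assuming real-valued signals so that $x^*(i) = x(i)$, builds $\mathbf{R}_x(n)$ and $\mathbf{r}_{dx}(n)$ exactly as defined and solves $\mathbf{R}_x(n)\,\mathbf{w}_n = \mathbf{r}_{dx}(n)$; the function name and the default lam are illustrative assumptions.

```python
import numpy as np

def weighted_normal_equations(x, d, p=3, lam=0.99):
    """Batch solution of the exponentially weighted normal equations
    R_x(n) w_n = r_dx(n); real-valued data assumed, so x*(i) = x(i)."""
    n = len(x) - 1
    R = np.zeros((p + 1, p + 1))
    r = np.zeros(p + 1)
    for i in range(p, n + 1):
        x_vec = x[i - p:i + 1][::-1]        # [x(i), x(i-1), ..., x(i-p)]
        wgt = lam ** (n - i)                # exponential weight lambda^(n-i)
        R += wgt * np.outer(x_vec, x_vec)   # accumulate R_x(n)
        r += wgt * d[i] * x_vec             # accumulate r_dx(n)
    return np.linalg.solve(R, r)            # w_n
```

The recursive RLS derived next produces the same coefficient vector at each time step without re-solving this (p+1) x (p+1) system from scratch.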
CONTND…. RLS

• Let us derive a recursive solution of the form

  $\mathbf{w}_n = \mathbf{w}_{n-1} + \Delta\mathbf{w}_{n-1}$

  where $\Delta\mathbf{w}_{n-1}$ is the correction applied to the solution at time $n-1$.

• Starting from $\mathbf{w}_n = \mathbf{R}_x^{-1}(n)\, \mathbf{r}_{dx}(n)$, both quantities obey rank-one time updates:

  $\mathbf{r}_{dx}(n) = \lambda\, \mathbf{r}_{dx}(n-1) + d(n)\, \mathbf{x}^*(n)$

  $\mathbf{R}_x(n) = \lambda\, \mathbf{R}_x(n-1) + \mathbf{x}^*(n)\, \mathbf{x}^T(n)$
CONTND…. RLS

• Apply Woodbury's identity (matrix inversion lemma)

  $(\mathbf{A} + \mathbf{u}\mathbf{v}^H)^{-1} = \mathbf{A}^{-1} - \dfrac{\mathbf{A}^{-1}\mathbf{u}\mathbf{v}^H\mathbf{A}^{-1}}{1 + \mathbf{v}^H\mathbf{A}^{-1}\mathbf{u}}$

  with $\mathbf{A} = \lambda\,\mathbf{R}_x(n-1)$ and $\mathbf{u} = \mathbf{v} = \mathbf{x}^*(n)$:

  $\mathbf{R}_x^{-1}(n) = \lambda^{-1}\left[\mathbf{R}_x^{-1}(n-1) - \dfrac{\lambda^{-1}\,\mathbf{R}_x^{-1}(n-1)\,\mathbf{x}^*(n)\,\mathbf{x}^T(n)\,\mathbf{R}_x^{-1}(n-1)}{1 + \lambda^{-1}\,\mathbf{x}^T(n)\,\mathbf{R}_x^{-1}(n-1)\,\mathbf{x}^*(n)}\right]$

  which defines the recursion for $\mathbf{P}(n) = \mathbf{R}_x^{-1}(n)$.
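As a quick sanity check of the inversion-lemma step, the snippet below compares the Woodbury update against a direct inverse for a random positive definite matrix. Real-valued data are assumed, so $x^*(n) = x(n)$; the dimension and lam are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
p, lam = 4, 0.98                               # illustrative dimension and forgetting factor
M = rng.standard_normal((p, p))
R_prev = M @ M.T + p * np.eye(p)               # R_x(n-1): symmetric positive definite
x_new = rng.standard_normal(p)                 # x(n); real-valued, so x*(n) = x(n)

# Direct route: update R_x and invert.
R_new = lam * R_prev + np.outer(x_new, x_new)
P_direct = np.linalg.inv(R_new)

# Woodbury route with A = lambda * R_x(n-1) and u = v = x(n).
A_inv = np.linalg.inv(lam * R_prev)
P_woodbury = A_inv - (A_inv @ np.outer(x_new, x_new) @ A_inv) / (1.0 + x_new @ A_inv @ x_new)

print(np.allclose(P_direct, P_woodbury))       # expect True
```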


CONTND…. RLS

• With the recursive definition of $\mathbf{P}(n) = \mathbf{R}_x^{-1}(n)$, the desired gain vector follows:

  $\mathbf{g}(n) = \mathbf{P}(n)\, \mathbf{x}^*(n)$, i.e. $\mathbf{R}_x(n)\, \mathbf{g}(n) = \mathbf{x}^*(n)$

• The time update equation for the coefficient vector $\mathbf{w}_n$:

  $\mathbf{w}_n = \mathbf{P}(n)\, \mathbf{r}_{dx}(n) = \lambda\, \mathbf{P}(n)\, \mathbf{r}_{dx}(n-1) + \mathbf{P}(n)\, \mathbf{x}^*(n)\, d(n)$


CONTND…. RLS

• $\mathbf{w}_n = \mathbf{w}_{n-1} + \alpha(n)\, \mathbf{g}(n)$ is the final form of the update equation.

• $\alpha(n) = d(n) - \mathbf{x}^T(n)\, \mathbf{w}_{n-1}$ is the a priori error, while $e(n) = d(n) - \mathbf{x}^T(n)\, \mathbf{w}_n$ is the a posteriori error.

• The correction is therefore the a priori error $\alpha(n)$ scaled by the gain vector $\mathbf{g}(n)$: this is the exponentially weighted RLS algorithm.

• For $\lambda = 1$ it reduces to the growing window RLS algorithm.

• The RLS algorithm recursively updates both the coefficient vector $\mathbf{w}_n$ and the inverse autocorrelation matrix $\mathbf{P}(n)$.
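Putting the recursions together (gain $\mathbf{g}(n)$, a priori error $\alpha(n)$, coefficient update, and $\mathbf{P}(n)$ update), here is a sketch of exponentially weighted RLS for real-valued signals. The initialization $\mathbf{P} = \delta^{-1}\mathbf{I}$ with a small delta, and the default lam, are common conventions assumed here rather than values stated on the slides.

```python
import numpy as np

def rls_filter(x, d, p=3, lam=0.99, delta=0.01):
    """Exponentially weighted RLS for a real-valued FIR filter with p+1 taps.
    Returns the final coefficient vector and the a priori error sequence."""
    w = np.zeros(p + 1)                        # w_{n-1}, updated in place
    P = np.eye(p + 1) / delta                  # P(n): inverse autocorrelation estimate
    alpha = np.zeros(len(x))                   # a priori errors alpha(n)
    for n in range(p, len(x)):
        x_vec = x[n - p:n + 1][::-1]           # [x(n), x(n-1), ..., x(n-p)]
        Px = P @ x_vec
        g = Px / (lam + x_vec @ Px)            # gain vector g(n)
        alpha[n] = d[n] - w @ x_vec            # a priori error alpha(n)
        w = w + alpha[n] * g                   # coefficient update w_n
        P = (P - np.outer(g, x_vec) @ P) / lam # inverse autocorrelation update
    return w, alpha
```

For lam = 1 the same loop implements the growing window RLS algorithm mentioned above.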
SLIDING WINDOW RLS

• Minimizes the least squares error over a fixed-length window of the most recent samples, with $\lambda = 1$:

  $\mathcal{E}(n) = \sum_{i=n-L+1}^{n} |e(i)|^2$

• The derivation follows a procedure similar to the previous one.
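The slides leave the sliding-window derivation to the reader, so the sketch below only shows a brute-force reference implementation of the sliding-window criterion (re-solving the least squares problem over the last L samples at each step), not the recursive update/downdate form of the algorithm. The window length L and all names are assumptions.

```python
import numpy as np

def sliding_window_ls(x, d, p=3, L=100):
    """Brute-force reference for the sliding-window criterion (lambda = 1):
    at each time n, re-solve least squares over the most recent L samples.
    The true sliding-window RLS reaches the same solution recursively, adding
    the newest sample and 'downdating' the one that leaves the window."""
    w = np.zeros(p + 1)
    for n in range(L + p - 1, len(x)):
        rows = [x[i - p:i + 1][::-1] for i in range(n - L + 1, n + 1)]
        X = np.asarray(rows)                   # L x (p+1) data matrix for the window
        d_win = d[n - L + 1:n + 1]
        w, *_ = np.linalg.lstsq(X, d_win, rcond=None)
    return w
```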


• Recursive least squares (RLS) is an adaptive filter algorithm that recursively finds
the coefficients that minimize a weighted linear least squares cost function relating to
the input signals. This approach is in contrast to other algorithms, such as least
mean squares (LMS), that aim to reduce the mean square error. In the derivation of
RLS, the input signals are considered deterministic, while for LMS and similar
algorithms they are considered stochastic. Compared to most of its competitors,
RLS exhibits extremely fast convergence. However, this benefit comes at the cost of
high computational complexity.

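To illustrate the convergence claim above, here is a self-contained toy comparison of LMS and RLS on the same system-identification problem; the 4-tap system, noise level, step size mu, and forgetting factor lam are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 1000, 3
h_true = np.array([0.8, -0.4, 0.2, 0.1])              # unknown 4-tap system (illustrative)
x = rng.standard_normal(N)
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)

w_lms, mu = np.zeros(p + 1), 0.05                      # LMS state
w_rls, P, lam = np.zeros(p + 1), np.eye(p + 1) / 0.01, 0.99   # RLS state

for n in range(p, N):
    x_vec = x[n - p:n + 1][::-1]                       # [x(n), ..., x(n-p)]

    # LMS: fixed step size mu
    e_lms = d[n] - w_lms @ x_vec
    w_lms = w_lms + mu * e_lms * x_vec

    # RLS: gain computed from the inverse autocorrelation estimate P
    Px = P @ x_vec
    g = Px / (lam + x_vec @ Px)
    a = d[n] - w_rls @ x_vec
    w_rls = w_rls + a * g
    P = (P - np.outer(g, x_vec) @ P) / lam

print("true taps:", h_true)
print("LMS  taps:", np.round(w_lms, 3))
print("RLS  taps:", np.round(w_rls, 3))
```

On runs like this, the RLS error typically collapses within a few tens of samples, while LMS needs a few hundred at this step size, at the cost of the extra matrix arithmetic per update.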




APPLICATION OF RLS
• Image processing: enhancing and restoring data by removing noise.
• Medical applications.


CONTND….

• Note that we can now write recursive-in-time equations for $\mathbf{R}$ and $\mathbf{g}$ (here $\mathbf{g}_n$ denotes the cross-correlation vector):

  $\mathbf{R}_n = \mathbf{R}_{n-1} + \mathbf{x}_n\,\mathbf{x}_n^t$

  $\mathbf{g}_n = \mathbf{g}_{n-1} + d(n)\,\mathbf{x}_n$

• We seek solutions of the form

  $\mathbf{f}_n = \mathbf{R}_n^{-1}\,\mathbf{g}_n, \qquad \mathbf{f}_{n+1} = \mathbf{R}_{n+1}^{-1}\,\mathbf{g}_{n+1}$

• We can apply the matrix inversion lemma for computation of the inverse.
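The recursive-in-time accumulation of $\mathbf{R}_n$ and $\mathbf{g}_n$ can be checked numerically: building them sample by sample and solving $\mathbf{f}_n = \mathbf{R}_n^{-1}\mathbf{g}_n$ should reproduce the batch least squares solution over the same data. The signals and system below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 500, 3
x = rng.standard_normal(N)
d = np.convolve(x, [0.5, -0.3, 0.1, 0.05])[:N]   # illustrative desired signal

R = np.zeros((p + 1, p + 1))          # R_n, built by rank-one updates
g = np.zeros(p + 1)                   # g_n, the cross-correlation vector
rows, targets = [], []
for n in range(p, N):
    x_vec = x[n - p:n + 1][::-1]
    R += np.outer(x_vec, x_vec)       # R_n = R_{n-1} + x_n x_n^t
    g += d[n] * x_vec                 # g_n = g_{n-1} + d(n) x_n
    rows.append(x_vec)
    targets.append(d[n])

f_recursive = np.linalg.solve(R, g)   # f_n = R_n^{-1} g_n
f_batch, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
print(np.allclose(f_recursive, f_batch))   # expect True
```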


CONTND….

• Substituting the inversion lemma gives

  $\mathbf{f}_{n+1} = \left[\mathbf{R}_n^{-1} - \dfrac{\mathbf{R}_n^{-1}\,\mathbf{x}_{n+1}\,\mathbf{x}_{n+1}^t\,\mathbf{R}_n^{-1}}{1 + \mathbf{x}_{n+1}^t\,\mathbf{R}_n^{-1}\,\mathbf{x}_{n+1}}\right]\left[\mathbf{g}_n + d(n+1)\,\mathbf{x}_{n+1}\right]$

• Define an intermediate vector variable $\mathbf{z} = \mathbf{R}_n^{-1}\,\mathbf{x}_{n+1}$ and an intermediate scalar variable $k = \mathbf{x}_{n+1}^t\,\mathbf{z}$. Expanding,

  $\mathbf{f}_{n+1} = \mathbf{f}_n + d(n+1)\,\mathbf{z} - \dfrac{\mathbf{z}\,\mathbf{x}_{n+1}^t\,\mathbf{f}_n}{1 + \mathbf{x}_{n+1}^t\,\mathbf{z}} - \dfrac{d(n+1)\,\mathbf{z}\,\mathbf{x}_{n+1}^t\,\mathbf{z}}{1 + \mathbf{x}_{n+1}^t\,\mathbf{z}}$

• Define the a priori error as

  $e(n+1/n) = d(n+1) - \mathbf{f}_n^t\,\mathbf{x}_{n+1}$

  reflecting that this is the error obtained using the old filter and the new data.
CONTND….

• Using this definition, we can rewrite the RLS algorithm update equation as:

  $\mathbf{f}_{n+1} = \mathbf{f}_n + \dfrac{e(n+1/n)}{1 + \mathbf{x}_{n+1}^t\,\mathbf{R}_n^{-1}\,\mathbf{x}_{n+1}}\; \mathbf{R}_n^{-1}\,\mathbf{x}_{n+1}$


SUMMARY

1) Initialize $\mathbf{f}_0$ and $\mathbf{R}_0^{-1}$.

2) Iterate for n = 0, 1, …

  $e(n+1/n) = d(n+1) - \mathbf{f}_n^t\,\mathbf{x}_{n+1}$

  $\alpha(n) = \dfrac{1}{1 + \mathbf{x}_{n+1}^t\,\mathbf{R}_n^{-1}\,\mathbf{x}_{n+1}}$

  $\mathbf{f}_{n+1} = \mathbf{f}_n + \alpha(n)\, e(n+1/n)\, \mathbf{R}_n^{-1}\,\mathbf{x}_{n+1}$

  $\mathbf{R}_{n+1}^{-1} = \mathbf{R}_n^{-1} - \alpha(n)\, \mathbf{R}_n^{-1}\,\mathbf{x}_{n+1}\,\mathbf{x}_{n+1}^t\,\mathbf{R}_n^{-1}$

RLS VS NEWTON METHOD

• RLS inverse-autocorrelation update (growing window):

  $\mathbf{R}_{n+1}^{-1} = \left(\mathbf{R}_n + \mathbf{x}_{n+1}\,\mathbf{x}_{n+1}^t\right)^{-1} = \mathbf{R}_n^{-1} - \dfrac{\mathbf{R}_n^{-1}\,\mathbf{x}_{n+1}\,\mathbf{x}_{n+1}^t\,\mathbf{R}_n^{-1}}{1 + \mathbf{x}_{n+1}^t\,\mathbf{R}_n^{-1}\,\mathbf{x}_{n+1}}$

• Newton-type update:

  $e(n) = d(n) - \mathbf{f}_n^t\,\mathbf{x}_n$

  $\mathbf{f}_{n+1} = \mathbf{f}_n + \mathbf{R}_n^{-1}\, e(n)\,\mathbf{x}_n$

• The RLS algorithm can be expected to converge more quickly because of its use of an aggressive, adaptive step size.
EXPONENTIALLY-WEIGHTED RLS ALGORITHM

• We define a weighted error function:

  $\tilde{J}_n = \sum_{l=0}^{n} \lambda^{\,n-l}\, e^2(l)$

• This gives more weight to the most recent errors.

• The RLS algorithm can be modified in this case:

  1) Initialize $\mathbf{f}_0$ and $\mathbf{R}_0^{-1}$.

  2) Iterate for n = 1, 2, 3, …

     $e(n/n-1) = d(n) - \mathbf{f}_{n-1}^t\,\mathbf{x}_n$

     $\alpha(n) = \dfrac{1}{\lambda + \mathbf{x}_n^t\,\mathbf{R}_{n-1}^{-1}\,\mathbf{x}_n}$

     $\mathbf{f}_n = \mathbf{f}_{n-1} + \alpha(n)\, e(n/n-1)\, \mathbf{R}_{n-1}^{-1}\,\mathbf{x}_n$

     $\mathbf{R}_n^{-1} = \dfrac{1}{\lambda}\left[\mathbf{R}_{n-1}^{-1} - \alpha(n)\, \mathbf{R}_{n-1}^{-1}\,\mathbf{x}_n\,\mathbf{x}_n^t\,\mathbf{R}_{n-1}^{-1}\right]$

• RLS is computationally more complex than simple LMS because it is O(L²) per update.

• In principle, convergence is independent of the eigenvalue structure of the signal due to the premultiplication by the inverse of the autocorrelation matrix.


Thank you for your sincere attention!
