Univariate Smoothing
• Problem definition
• Interpolation
• Polynomial smoothing
• Cubic splines
• Basis splines
• Smoothing splines
• Bayes' rule
• Density estimation
• Kernel smoothing
• Local averaging
• Weighted least squares
• Local linear models
• Prediction error estimates

• Smoothing Problem: Given a data set with a single input variable x, find the best function ĝ(x) that minimizes the prediction error on new inputs (probably not in the data set)
• Interpolation Problem: Same as the smoothing problem, except the model is subject to the constraint ĝ(xi) = yi for every input-output pair (xi, yi) in the data set
  – Linear Interpolation: Use a line between each pair of points
  – Nearest Neighbor Interpolation: Find the nearest input in the data set and use the corresponding output as an approximate fit
  – Polynomial Interpolation: Fit a polynomial of order n − 1 to the n input-output pairs: ĝ(x) = Σ_{i=1}^{n} wi x^(i−1)
  – Cubic Spline Interpolation: Fit a cubic polynomial with continuous second derivatives between each pair of points (more on this later)
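The three interpolation variants above can be sketched in a few lines. Below is an illustrative NumPy translation, not the course code; the toy x and y values are hypothetical, not the chirp data.

```python
import numpy as np

# Toy data set of n = 4 input-output pairs (hypothetical values).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

# Linear interpolation: a line between each pair of points.
g_lin = np.interp(1.5, x, y)                 # halfway between y=2 and y=0

# Nearest-neighbor interpolation: output of the closest input.
g_nn = y[np.argmin(np.abs(x - 1.5))]

# Polynomial interpolation: order n-1 polynomial with g(x_i) = y_i,
# found by solving the Vandermonde system A w = y.
A = np.vander(x, increasing=True)            # columns x^0, x^1, ..., x^(n-1)
w = np.linalg.solve(A, y)
g_poly = np.polyval(w[::-1], 1.5)            # polyval wants highest power first
```

The Vandermonde solve enforces ĝ(xi) = yi exactly, which is what distinguishes interpolation from smoothing.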
J. McNames, Portland State University, ECE 4/557, Univariate Smoothing, Ver. 1.25
Example 1: MATLAB Code

% function [] = Interpolation ();

close all;

N  = 15;
rand('state',2);
x  = rand(N,1);
y  = sin(2*pi*2*x.^2) + 0.2*randn(N,1);
xt = (0:0.0001:1)'; % Test inputs

% ================================================
% Linear Interpolation
% ================================================
figure;
FigureSet(1,'LTX');
yh = interp1(x,y,xt,'linear');
h  = plot(xt,yh,'b',x,y,'r.');
set(h,'MarkerSize',8);
set(h,'LineWidth',1.2);
xlabel('Input x');
ylabel('Output y');
title('Chirp Linear Interpolation');
set(gca,'Box','Off');
grid on;
axis([0 1 -2 2]);
AxisSet(8);
print -depsc InterpolationLinear;

% ================================================
% Nearest Neighbor Interpolation
% ================================================
figure;
FigureSet(1,'LTX');
yh = interp1(x,y,xt,'nearest');
h  = plot(xt,yh,'b',x,y,'r.');
set(h,'MarkerSize',8);
set(h,'LineWidth',1.2);
xlabel('Input x');
ylabel('Output y');
title('Chirp Nearest Neighbor Interpolation');
set(gca,'Box','Off');
grid on;
axis([0 1 -2 2]);
AxisSet(8);
print -depsc InterpolationNearestNeighbor;

% ================================================
% Polynomial Interpolation
% ================================================
A = zeros(N,N);
for cnt = 1:size(A,2),
  A(:,cnt) = x.^(cnt-1);
end;
w  = pinv(A)*y;
At = zeros(length(xt),N);
for cnt = 1:size(A,2),
  At(:,cnt) = xt.^(cnt-1);
end;
yh = At*w;

figure;
FigureSet(1,'LTX');
h = plot(xt,yh,'b',x,y,'r.');
set(h,'MarkerSize',8);
set(h,'LineWidth',1.2);
xlabel('Input x');
ylabel('Output y');
title('Chirp Polynomial Interpolation');
set(gca,'Box','Off');
grid on;
axis([0 1 -2 2]);
AxisSet(8);
print -depsc InterpolationPolynomial;

% ================================================
% Cubic Spline Interpolation
% ================================================
figure;
FigureSet(1,'LTX');
yh = spline(x,y,xt);
h  = plot(xt,yh,'b',x,y,'r.');
set(h,'MarkerSize',8);
set(h,'LineWidth',1.2);
xlabel('Input x');
ylabel('Output y');
title('Chirp Cubic Spline Interpolation');
set(gca,'Box','Off');
grid on;
axis([0 1 -2 2]);
AxisSet(8);
print -depsc InterpolationCubicSpline;

% ================================================
% Optimal Model
% ================================================
figure;
FigureSet(1,'LTX');
yt = sin(2*pi*2*xt.^2);
h  = plot(xt,yt,'b',x,y,'r.');
set(h,'MarkerSize',8);
set(h,'LineWidth',1.2);
xlabel('Input x');
ylabel('Output y');
title('Chirp Optimal Model');
set(gca,'Box','Off');
grid on;
axis([0 1 -2 2]);
AxisSet(8);
print -depsc InterpolationOptimalModel;
Example 2: Nearest Neighbor Interpolation
[Figure: nearest-neighbor interpolation of the chirp data; Output y vs. Input x]

[Figure: interpolation of the chirp data; Output y vs. Input x]

Example 4: Cubic Spline Interpolation
[Figure: cubic spline interpolation of the chirp data; Output y vs. Input x]
Smoothing Assumptions and Statistical Model

y = g(x) + ε

• Generally we assume that the data was generated from the statistical model above
  – εi is identically distributed
• Two additional assumptions are usually made for the smoothing problem
  – g(x) is continuous
  – g(x) is smooth

Example 5: Interpolation Optimal Model
[Figure: the true chirp function overlaid on the data ("Chirp Optimal Model"); Output y vs. Input x]
Bias-Variance Tradeoff Continued

MSE(x) = (g(x) − E[ĝ(x)])^2 + E[(ĝ(x) − E[ĝ(x)])^2]

• Smooth models
  – Less sensitive to the data
  – Less variance
  – Potentially high bias since they don't fit the data well
• Flexible models
  – Sensitive to the data
  – In the most extreme case, they interpolate the data
  – High variance since they are sensitive to the data
  – Low bias

Example 6: Univariate Smoothing Data
[Figure: Motorcycle Data Set; Output y vs. Input x]
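The decomposition above can be checked numerically. The sketch below uses an assumed toy setup (true function sin(2πx), noise level 0.3, polynomial smoothers of degree 1 and 9, none of which come from the slides): it repeatedly regenerates the data and estimates the bias and variance of ĝ(x0) at one query point.

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda x: np.sin(2 * np.pi * x)          # true function (assumed)
x = np.linspace(0.0, 1.0, 20)
x0 = 0.25                                    # query point

def fit_predict(deg):
    # One fresh noisy data set -> one polynomial fit -> prediction at x0.
    y = g(x) + 0.3 * rng.standard_normal(x.size)
    return np.polyval(np.polyfit(x, y, deg), x0)

results = {}
for deg in (1, 9):                           # smooth vs. flexible model
    preds = np.array([fit_predict(deg) for _ in range(2000)])
    bias2 = (g(x0) - preds.mean()) ** 2      # squared bias term
    var = preds.var()                        # variance term
    mse = ((preds - g(x0)) ** 2).mean()      # equals bias2 + var exactly
    results[deg] = (bias2, var, mse)
```

The sample-moment identity makes bias2 + var match mse exactly, and the flexible degree-9 model shows the larger variance, as the bullets predict.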
Example 6: MATLAB Code

function [] = SmoothingProblem ();

A = load('MotorCycle.txt');
x = A(:,1);
y = A(:,2);

figure;
FigureSet(1,'LTX');
h = plot(x,y,'r.');
set(h,'MarkerSize',6);
xlabel('Input x');
ylabel('Output y');
title('Motorcycle Data Set');
set(gca,'Box','Off');
grid on;
ymin = min(y);
ymax = max(y);
yrng = ymax - ymin;
ymin = ymin - 0.05*yrng;
ymax = ymax + 0.05*yrng;
axis([min(x) max(x) ymin ymax]);
AxisSet(8);

% ================================================
% Linear
% ================================================
figure;
FigureSet(1,4.5,2.8);
A = [ones(N,1) x];
w = pinv(A)*y;
yh = [ones(size(xt)) xt]*w;
h = plot(xt,yh,'b',x,y,'r.');
set(h,'MarkerSize',8);
set(h,'LineWidth',1.2);
xlabel('Input x');
ylabel('Output y');
title('Chirp Linear Least Squares');
set(gca,'Box','Off');
grid on;
ymin = min(y);
ymax = max(y);
yrng = ymax - ymin;
ymin = ymin - 0.05*yrng;
ymax = ymax + 0.05*yrng;
axis([min(x) max(x) ymin ymax]);
AxisSet(8);
Polynomial Smoothing

• We can fit a polynomial ĝ(x) = Σ_{i=0}^{p−1} wi x^i to the data using the linear modeling methods
• Note that linear models are linear in the parameters wi
• They need not be linear in the inputs
• Alternatively, you can think of this as a linear model with p different inputs, where the ith input is given by xi = x^i
• This model is smooth in the sense that all derivatives of ĝ(x) are continuous
• This is one measure of model smoothness
• In general, this is a terrible smoother
  – Terrible at extrapolation
  – The matrix inverse is often poorly conditioned and regularization is necessary
  – The user has to pick the order of the polynomial p − 1

Example 7: Polynomial Smoothing
[Figure: Motorcycle data with linear, quadratic, cubic, 4th-order, and 5th-order least-squares fits; Output y vs. Input x]
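The conditioning problem can be seen directly: build the monomial design matrix for increasing order and watch its condition number explode while the least-squares residual shrinks. This is an illustrative NumPy sketch with made-up data on a motorcycle-like input scale, not the course data.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 55.0, 100))             # inputs (hypothetical)
y = np.sin(x / 8.0) + 0.1 * rng.standard_normal(x.size)

conds, resids = [], []
for p in (2, 4, 8, 12):                              # p terms = order p-1
    A = np.vander(x, p, increasing=True)             # columns x^0 ... x^(p-1)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)        # smoothing, not interpolation
    conds.append(np.linalg.cond(A))
    resids.append(np.linalg.norm(y - A @ w))
```

On unscaled inputs the condition number grows by many orders of magnitude with p, which is exactly why regularization (or a better basis, such as splines) becomes necessary.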
Cubic Splines Functional Form

ĝ(x) = Σ_{i=0}^{3} wi(x) x^i

• Unlike polynomial regression, here the parameters wi(x) are also a function of x
• Consider a class of functions ĝ(x) that have the following properties
  – Continuous
  – Continuous 1st derivative
  – Continuous 2nd derivative
  – Interpolates the data: ĝ(xi) = yi

Cubic Splines Smoothness Definition

• Out of all the functions that meet the above criteria, consider those that also minimize the approximate "curvature" of ĝ(x)

C ≡ ∫_{xmin}^{xmax} (d²ĝ(x)/dx²)² dx

• These are piecewise cubics and are called cubic splines
• In the sense of satisfying the criteria listed above and minimizing the curvature C, cubic splines are optimal
• Even with all of these constraints, ĝ(x) is not uniquely specified
• There are several cubic splines that meet the strict criteria and have the same curvature
• The most popular additional constraints are

ĝ''(xmin) = 0    ĝ''(xmax) = 0

• These are called natural cubic splines
Cubic Splines Constraints

• Cubic splines are piecewise cubic
• This means ĝ(x) = Σ_{i=0}^{3} wi(x) x^i has different weights between each pair of points
• For the entire region between each pair of points, the weights are fixed
• Each polynomial is defined by 4 parameters wi(x)
• We have n + 1 regions, where n is the number of points in the data set
• Thus, we need at least 4 × (n + 1) constraints to uniquely specify the weights

Let pk(x) be the polynomial between the points xk and xk+1. We need 4 × (n + 1) constraints to have the problem well defined.

Property                     Expression                          Constraints
Interpolation                ĝ(xi) = yi                          n
Continuous                   pk(xk+1) = pk+1(xk+1)               n
Continuous Derivative        p'k(xk+1) = p'k+1(xk+1)             n
Continuous 2nd Derivative    p''k(xk+1) = p''k+1(xk+1)           n

Natural splines have 4 additional constraints

p''0(x1) = 0    p'''0(x1) = 0
p''n(xn) = 0    p'''n(xn) = 0
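As a quick check of these constraints, SciPy's CubicSpline supports bc_type='natural', which imposes exactly the zero second-derivative endpoint conditions described above. The toy data below is hypothetical; this is a sketch, not the course implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical data set of n = 5 points.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 0.0, -1.0, 0.0])

# 'natural' boundary conditions: second derivative is zero at both ends.
cs = CubicSpline(x, y, bc_type='natural')

g_at_knots = cs(x)                              # interpolation: equals y
end_curvatures = (cs(x[0], 2), cs(x[-1], 2))    # 2nd derivative at the ends
```

The spline satisfies the interpolation constraints ĝ(xi) = yi and the natural endpoint constraints simultaneously, confirming the constraint count makes the problem well posed.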
Basis Splines

• You could solve for the 4(n + 1) model coefficients by solving a set of 4(n + 1) linear equations
• This is cumbersome and very inefficient mathematically
• An easier way is to use basis functions
• Mathematically, each basis function is defined recursively

b_{i,j}(x) = ((x − kj)/(k_{i+j} − kj)) b_{i−1,j}(x) + ((k_{i+j+1} − x)/(k_{i+j+1} − k_{j+1})) b_{i−1,j+1}(x)

Basis Splines Continued

• The output of our model can then be written as

ĝ(x) = Σ_{i=−2}^{n−1} wi b_{3,i}(x)

• Numerically, this can be solved much more quickly (the order is proportional to n)
• Since the basis functions have finite support (i.e., finite span), the equivalent A matrix is banded
• Basis splines also have the nice property that they sum to unity

Σ_{j=1−i}^{n−1} b_{i,j}(x) = 1    ∀ x ∈ [k1, kn]
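The recursion above (often called the Cox-de Boor recursion) is easy to implement directly. In this sketch the helper takes (j, d) where d plays the role of the recursion level i on the slide; the knot vector and evaluation grid are assumptions chosen so the sum-to-unity property can be checked on an interior interval.

```python
import numpy as np

def bspline(j, d, k, x):
    """Degree-d B-spline basis function on knot vector k, via the
    recursive definition (terms with a zero denominator drop out)."""
    if d == 0:
        return np.where((k[j] <= x) & (x < k[j + 1]), 1.0, 0.0)
    out = np.zeros_like(x, dtype=float)
    if k[j + d] > k[j]:
        out += (x - k[j]) / (k[j + d] - k[j]) * bspline(j, d - 1, k, x)
    if k[j + d + 1] > k[j + 1]:
        out += (k[j + d + 1] - x) / (k[j + d + 1] - k[j + 1]) * bspline(j + 1, d - 1, k, x)
    return out

# Uniform knots with extra knots on each side so every cubic basis
# function overlapping [0, 10) is available.
knots = np.arange(-3.0, 14.0)
x = np.linspace(0.0, 9.99, 200)
B = np.vstack([bspline(j, 3, knots, x) for j in range(len(knots) - 4)])
unity = B.sum(axis=0)        # partition of unity on [0, 10)
```

Each cubic basis function is nonzero only over four knot spans, which is the finite support that makes the design matrix banded.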
Example 8: Basis Functions
[Figures: cubic basis functions 0 through 3 on a uniform knot grid; Output y vs. Input x]
Smoothing Splines

The smoothing spline ĝ(x) minimizes the penalized criterion

Eλ = Σ_{i=1}^{n} (yi − ĝ(xi))² + λ ∫_{−∞}^{+∞} (ĝ''(x))² dx

where λ controls the tradeoff between fidelity to the data and smoothness.
Example 10: Smoothing Spline
[Figures: Motorcycle data smoothing spline regression for α = 0.0001, 0.0010, 0.0100, 0.2000, 0.5000, 0.9000, and 0.9900; Output y vs. Input x]
Review of Bayes' Rule

• Bayes' rule says that two discrete-valued random variables A and B have the following relationship

Pr{B|A} = Pr{A, B}/Pr{A} = Pr{A|B} Pr{B}/Pr{A}

• Recall that earlier we found that the ĝ(x) that minimizes the MSE is given by

Ŷ = g*(x) = E[Y|X = x]

• For smoothing, we can use the continuous analog of Bayes' rule to estimate E[Y|X = x]

f(y|X = x) = f(x, y)/f(x) = f(x|Y = y) f(y)/f(x)

Continuous Bayes' Rule

f(y|X = x) = f(x, y)/f(x) = f(x|Y = y) f(y)/f(x)

• E[Y|X = x] is given by

E[Y|X = x] = ∫_{−∞}^{+∞} y f(y|X = x) dy

• In order to estimate these equations we need a means of estimating the densities f(x) and f(x, y)
• A popular method of estimating a density is to add a series of "bumps" together
• The bumps are called kernels and should have the following property

∫_{−∞}^{+∞} bσ(u) du = 1
• Each bump is a scaled kernel

bσ(u) = (1/σ) b(u/σ)

where it is easy to show that ∫ bσ(u) du = 1 for any value of σ
• Bumps shaped like a Gaussian are popular

b(u) = (1/√(2π)) e^(−u²/2)

• Typically the bumps have even symmetry: b(u) = b(−u) = b(|u|)

[Figure: Motorcycle data density estimate; Density p(x) vs. Input x]
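The "sum of bumps" density estimate follows directly from these definitions. An illustrative NumPy sketch with synthetic data (standard-normal sample and bandwidth 0.4 are assumptions, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=200)               # sample whose density we estimate

def kde(x, data, sigma):
    # f_hat(x) = (1/n) * sum_i b_sigma(x - x_i) with Gaussian bumps b_sigma.
    u = (x[:, None] - data[None, :]) / sigma
    bumps = np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * sigma)
    return bumps.mean(axis=1)

x = np.linspace(-6.0, 6.0, 1201)
f = kde(x, data, sigma=0.4)
area = f.sum() * (x[1] - x[0])            # ~1, since each bump integrates to 1
```

Because every bump integrates to one and the estimate averages n of them, the result is automatically a valid (nonnegative, unit-area) density.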
Example 11: Density Estimation
[Figures: Motorcycle data density estimates for w = 0.2, 0.5, 1.0, and 5.0; Density p(x) vs. Input x]
Example 11: MATLAB Code

function [] = DensityEx ();

close all;

A = load('MotorCycle.txt');
x = A(:,1); % Raw values
y = A(:,2); % Raw values

xt = (-10:0.1:70)'; % Test inputs (assumed; not visible in the original listing)

W = [0.1 0.2 0.5 1.0 5.0];
for c1 = 1:length(W),
  w  = W(c1);
  bs = zeros(size(xt)); % Bump sum
  for c2 = 1:length(x),
    bs = bs + exp(-(xt-x(c2)).^2/(2*w.^2))/sqrt(2*pi*w^2);
  end;
  bs = bs/length(x);
  figure;
  FigureSet(1,'LTX');
  h = plot(x,zeros(size(x)),'k.',xt,bs);
  set(h,'LineWidth',1.5);
  xlabel('Input x');
  ylabel('Density p(x)');
  st = sprintf('Motorcycle Data Density Estimation w=%5.1f',w);
  title(st);
  set(gca,'Box','Off');
  grid on;
  axis([-10 70 0 0.1]);
  AxisSet(8);
  st = sprintf('print -depsc DensityEx%02d;',round(w*10));
  eval(st);
end;
• Although you can use this for large values of p, it is not recommended
• The estimate becomes inaccurate very quickly as the number of dimensions grows
• For one or two dimensions this is a pretty good technique
Example 12: 2D Density Estimation
[Figures: Motorcycle data input-output density estimates for w = 0.10, 0.20, 0.50, and 1.00; Output y vs. Input x with density colormap]
Example 12: MATLAB Code

function [] = DensityEx2D ();

close all;

A  = load('MotorCycle.txt');
xr = A(:,1); % Raw values
yr = A(:,2); % Raw values
xm = mean(xr);
ym = mean(yr);
xs = std(xr);
ys = std(yr);

x = (xr-xm)/xs;
y = (yr-ym)/ys;

W = [0.05 0.1 0.2 0.5 1.0];

xst = -2.0:0.02:2.5; % X-test points
yst = -2.5:0.02:2.5; % Y-test points
[xmt,ymt] = meshgrid(xst,yst); % Grids of scaled test points

xt = xst*xs + xm; % Unscaled x-test values
yt = yst*ys + ym; % Unscaled y-test values

for c1 = 1:length(W),
  w  = W(c1);
  bs = zeros(size(xmt)); % Bump sum
  for c2 = 1:length(x),
    bx = exp(-(xmt-x(c2)).^2/(2*w.^2))/sqrt(2*pi*w^2);
    by = exp(-(ymt-y(c2)).^2/(2*w.^2))/sqrt(2*pi*w^2);
    bs = bs + bx.*by;
  end;
  bs = bs/length(x);
  figure;
  FigureSet(1,'LTX');
  h = imagesc(xt,yt,bs);
  hold on;
  h = plot(xr,yr,'k.',xr,yr,'w.');
  set(h(1),'MarkerSize',4);
  set(h(2),'MarkerSize',2);
  hold off;
  set(gca,'YDir','Normal');
  xlabel('Input x');
  ylabel('Output y');
  st = sprintf('Motorcycle Data Input-Output Density Estimation w=%5.2f',w);
  title(st);
  set(gca,'Box','Off');
  colorbar;
  AxisSet(8);
  st = sprintf('print -depsc DensityEx2D%03d;',round(w*100));
  eval(st);
end;
• The following example shows the same data set without scaling
• Notice the oval-shaped bumps

[Figure: Motorcycle data density estimate without scaling; Output y vs. Input x]
Example 13: 2D Density Estimation
[Figures: Motorcycle data density estimates without input-output scaling for w = 1.00, 5.00, and larger kernel widths; Output y vs. Input x with density colormap]
Example 13: MATLAB Code

function [] = DensityEx2Db ();
% This is the same as DensityEx2D, except no scaling is used.
close all;

A = load('MotorCycle.txt');
x = A(:,1); % Raw values
y = A(:,2); % Raw values
W = [0.5 1.0 2.0 5.0 10.0 20.0 50.0];

% ... (density loop as in DensityEx2D, but without scaling; not recovered here) ...

h = plot(x,y,'k.',x,y,'w.');
set(h(1),'MarkerSize',4);
set(h(2),'MarkerSize',2);
hold off;
set(gca,'YDir','Normal');
xlabel('Input x');
ylabel('Output y');
st = sprintf('Motorcycle Data No Scaling Density Estimation w=%6.2f',w);
title(st);
set(gca,'Box','Off');
colorbar;
AxisSet(8);
st = sprintf('print -depsc DensityEx2Db%03d;',round(w*10));
eval(st);
end;

Kernel Smoothing Derivation

The following equations compose the Nadaraya-Watson estimator of E[y|x]

E[y|x] = ∫_{−∞}^{∞} y f(y|x) dy = ∫_{−∞}^{∞} y (f(x, y)/f(x)) dy = (∫_{−∞}^{∞} y f(x, y) dy)/f(x)

The two densities can be estimated as follows

f̂(x) = (1/n) Σ_{i=1}^{n} bσ(|x − xi|)
f̂(x, y) = (1/n) Σ_{i=1}^{n} bσ(|x − xi|) bσ(|y − yi|)
Kernel Smoothing Derivation Continued (1)

E[y|x] ≈ (∫_{−∞}^{∞} y f̂(x, y) dy)/f̂(x)

f̂(x) E[y|x] ≈ ∫_{−∞}^{∞} y (1/n) Σ_{i=1}^{n} bσ(|x − xi|) bσ(|y − yi|) dy
= (1/n) Σ_{i=1}^{n} bσ(|x − xi|) ∫_{−∞}^{∞} y bσ(|y − yi|) dy
= (1/n) Σ_{i=1}^{n} bσ(|x − xi|) ∫_{−∞}^{∞} (y − yi + yi) bσ(|y − yi|) dy
= (1/n) Σ_{i=1}^{n} bσ(|x − xi|) × [ yi ∫_{−∞}^{∞} bσ(|y − yi|) dy + ∫_{−∞}^{∞} (y − yi) bσ(|y − yi|) dy ]

Kernel Smoothing Derivation Continued (2)

f̂(x) E[y|x] ≈ (1/n) Σ_{i=1}^{n} bσ(|x − xi|) × [ yi + ∫_{−∞}^{∞} u bσ(|u|) du ]
= (1/n) Σ_{i=1}^{n} yi bσ(|x − xi|)

since the first integral is one and, by the kernel's even symmetry, the second integral vanishes. Dividing by f̂(x) gives

E[y|x] = [(1/n) Σ_{i=1}^{n} yi bσ(|x − xi|)] / [(1/n) Σ_{i=1}^{n} bσ(|x − xi|)]
= (Σ_{i=1}^{n} yi bσ(|x − xi|)) / (Σ_{i=1}^{n} bσ(|x − xi|))
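The final ratio is straightforward to implement. Below is a hedged NumPy sketch of the Nadaraya-Watson estimator on synthetic data; the course's Kernel helper function is not reproduced here, and the data, bandwidth, and query grid are assumptions.

```python
import numpy as np

def nadaraya_watson(xq, x, y, sigma):
    # g_hat(xq) = sum_i y_i b_sigma(|xq - x_i|) / sum_i b_sigma(|xq - x_i|)
    u = (xq[:, None] - x[None, :]) / sigma
    w = np.exp(-0.5 * u**2)          # Gaussian bump; normalization cancels
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
xq = np.linspace(0.0, 1.0, 50)
gh = nadaraya_watson(xq, x, y, sigma=0.05)
```

Because the weights are nonnegative and sum to one at each query point, the estimate is a convex combination of the yi, which is the bounded-output property noted on the next slide.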
• Here c is a constant chosen to meet the constraint ∫_{−∞}^{∞} b(u) du = 1
• p(u) is the unit pulse:

p(u) = 1 if |u| ≤ 1, and 0 otherwise

[Figure: six common kernel shapes plotted on [−2, 2]]
Example 14: MATLAB Code

function [] = Kernels ();

ST = 0.01;
x  = (-2.2:ST:2.2)';
u  = abs(x);
I  = (u<=1);
FigureSet(1,4.5,2.8);
% Note: K (matrix of kernel values) and L (kernel labels) are constructed
% in code not recovered in this excerpt.
for cnt = 1:6,
  subplot(2,3,cnt);
  h = plot([-5 5],[0 0],'k:',x,K(:,cnt));
  set(h(2),'LineWidth',1.5);
  title(char(L(cnt)));
  box off;
  axis([min(x) max(x) -0.3 1.2]);
end;

AxisSet(8);
print -depsc Kernels;
Kernel Smoothing Bias-Variance Tradeoff

E[y|x] ≈ ĝ(x) = (Σ_{i=1}^{n} yi bσ(|x − xi|)) / (Σ_{i=1}^{n} bσ(|x − xi|))

• Thus, as with smoothing splines, there is a single parameter that controls the tradeoff of smoothness (high bias) for the ability of the model to fit the data (high variance)
• Kernel smoothers have bounded outputs

min_i yi ≤ min_x ĝ(x) ≤ ĝ(x) ≤ max_x ĝ(x) ≤ max_i yi

Example 15: Kernel Smoothing
[Figure: Motorcycle data, Epanechnikov kernel smoothing with w = 0.1; Output y vs. Input x]
[Figures: Motorcycle data kernel smoothing with the Epanechnikov kernel (w = 1, 2, 5, 10) and the Gaussian kernel (w = 0.1, 1, 2, 5); Output y vs. Input x]
Example 15: MATLAB Code

close all;

A = load('MotorCycle.txt');
x = A(:,1); % Raw values
y = A(:,2); % Raw values

% Note: W (bandwidths) and xt (test inputs) are defined in code not
% recovered in this excerpt.

% Epanechnikov Kernel
for cnt = 1:length(W),
  w = W(cnt);
  figure;
  FigureSet(1,'LTX');
  yh = Kernel(x,y,xt,w,2);
  h  = plot(xt,yh,'b',x,y,'k.');
  set(h,'MarkerSize',8);
  set(h,'LineWidth',1.2);
  xlabel('Input x');
  ylabel('Output y');
  st = sprintf('Motorcycle Data Kernel Smoothing Epanechnikov Kernel w=%6.4f',w);
  title(st);
  set(gca,'Box','Off');
  grid on;
  axis([-10 70 -150 90]);
  AxisSet(8);
  st = sprintf('print -depsc EKernelSmoothingEx%03d;',round(w*10));
  eval(st);
end;

% Gaussian Kernel
for cnt = 1:length(W),
  w = W(cnt);
  figure;
  FigureSet(1,'LTX');
  yh = Kernel(x,y,xt,w,1);
  h  = plot(xt,yh,'b',x,y,'k.');
  set(h,'MarkerSize',8);
  set(h,'LineWidth',1.2);
  xlabel('Input x');
  ylabel('Output y');
  st = sprintf('Motorcycle Data Kernel Smoothing Gaussian Kernel w=%6.4f',w);
  title(st);
  set(gca,'Box','Off');
  grid on;
  axis([-10 70 -150 90]);
  AxisSet(8);
  st = sprintf('print -depsc GKernelSmoothingEx%03d;',round(w*10));
  eval(st);
end;

Local Averaging

ĝ(x) = Σ_{i=1}^{n} wi(x) yi

• We saw that kernel smoothers can be viewed as a weighted average
• Instead, we could take a local average of the k-nearest neighbors of x

ĝ(x) = (1/k) Σ_{i=1}^{k} y_{c(i)}

where c(i) is the data set index of the ith nearest point
• For this type of model, k controls the smoothness
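The k-nearest-neighbor average can be sketched as follows. This is an illustrative brute-force version with synthetic data, not the course's sorted-distance implementation.

```python
import numpy as np

def local_average(xq, x, y, k):
    # g_hat(q) = (1/k) * sum of y_{c(i)}, c(i) = index of the i-th nearest input.
    out = np.empty(xq.size)
    for j, q in enumerate(xq):
        nearest = np.argsort((x - q) ** 2)[:k]   # indices c(1), ..., c(k)
        out[j] = y[nearest].mean()
    return out

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 1.0, 80)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
xq = np.linspace(0.0, 1.0, 40)
g2 = local_average(xq, x, y, k=2)     # small k: flexible, high variance
g20 = local_average(xq, x, y, k=20)   # large k: smoother, higher bias
```

At one extreme, k = n returns the global mean everywhere; at the other, k = 1 reproduces nearest-neighbor interpolation, so k plays the same smoothness-controlling role as the kernel width.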
Local Average Concept: MATLAB Code

D = load('Motorcycle.txt');
x = D(:,1);
y = D(:,2);

[x,is] = sort(x);
y = y(is);
d = (x-q).^2;          % q (query point) is defined in code not recovered here
[ds,is] = sort(d);
xs = x(is);
ys = y(is);
xn = xs(1:k);          % inputs of the k nearest neighbors
yn = ys(1:k);          % outputs of the k nearest neighbors

[xsmin,imin] = min(xs(1:k));
[xsmax,imax] = max(xs(1:k));
imin = is(imin);
imax = is(imax);

rg   = max(y) - min(y);
ymin = min(y) - 0.1*rg;
ymax = max(y) + 0.1*rg;

xbox = [xll xul xul xll];
ybox = [ymin ymin ymax ymax];

yav = mean(yn)*[1 1];
xav = [xll xul];

A = [xn ones(k,1)];
b = yn;
v = pinv(A)*b;
xl1 = 0;
yl1 = [xl1 1]*v;
xl2 = 1.5;
yl2 = [xl2 1]*v;
xll = [xl1 xl2];
yll = [yl1 yl2];

figure;
FigureSet(1,'LTX');
hold on;
h = plot(x,y,'k.');
set(h,'MarkerSize',8);
h = plot(xav,yav,'r-');
set(h,'LineWidth',1.5);
% h = plot(xll,yll,'b:');
h = plot(q*[1 1],[ymin ymax],'b--');
set(h,'LineWidth',1.5);
hold off;
axis([min(x) max(x) ymin ymax]);
xlabel('Time (ms)');
ylabel('Head Acceleration (g)');
title('Motorcycle Data Set');
set(gca,'Layer','top');
set(gca,'Box','off');
AxisSet(8);
print -depsc LocalAverageConcept;
−150
−10 0 10 20 30 40 50 60 70
Time (ms)
J. McNames Portland State University ECE 4/557 Univariate Smoothing Ver. 1.25 107 J. McNames Portland State University ECE 4/557 Univariate Smoothing Ver. 1.25 108
Example 16: Local Averaging

[Figure: Local averaging fits to the motorcycle data for several values of k — head acceleration (g) vs. time (ms), four panels]
Example 16: MATLAB Code

function [] = LocalAverageFit();

close all;

D = load('Motorcycle.txt');
x = D(:,1);
y = D(:,2);

[x,is] = sort(x);
y = y(is);

xt = (-10:0.1:70)';            % evaluation grid
yh = zeros(size(xt));

K = [2 5 10 20 50];
for c = 1:length(K),
  k = K(c);
  for cnt = 1:length(xt),
    d = (x-xt(cnt)).^2;        % squared distances to the grid point
    [ds,is] = sort(d);
    xs = x(is);
    ys = y(is);
    xn = xs(1:k);
    yn = ys(1:k);
    yh(cnt) = mean(yn);        % local average of the k nearest outputs
  end;
  figure;
  FigureSet(1,'LTX');
  h = plot(x,y,'k.');
  set(h,'MarkerSize',8);
  hold on;
  h = stairs(xt,yh,'b');
  set(h,'LineWidth',1.2);
  hold off;
  axis([-10 70 -150 90]);
  xlabel('Time (ms)');
  ylabel('Head Acceleration (g)');
  st = sprintf('Motorcycle Data Set k=%d',k);
  title(st);
  set(gca,'Layer','top');
  set(gca,'Box','off');
  AxisSet(8);
  st = sprintf('print -depsc LocalAverageEx%02d;',k);
  eval(st);
end;
Example 17: Weighted Averaging

[Figure: Weighted local averaging fits to the motorcycle data for several values of k — head acceleration (g) vs. time (ms), four panels]
Example 17: Weighted Averaging

[Figure: Weighted local averaging fit to the motorcycle data — head acceleration (g) vs. time (ms)]

Example 17: MATLAB Code

xarg = x;
yarg = y;
x = unique(xarg);
y = zeros(size(x));
for cnt = 1:length(x),
  % (distances to x(cnt) computed and sorted as in Example 16)
  dn = ds(1:k);                % squared distances of the k nearest neighbors
  dmax = ds(k+1);              % (k+1)th distance sets the window scale
  xn = xs(1:k);
  yn = ys(1:k);
  w = (1-(dn/dmax)).^2;        % weights decay to zero at the window edge
  yt(cnt) = sum(w.*yn)/sum(w); % weighted local average
end;

Weighted Local Averaging Comments

        ĝ(x) = Σ_{i=1}^{k} b_k(|x − x_c(i)|) y_c(i) / Σ_{i=1}^{k} b_k(|x − x_c(i)|)
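Example 17's MATLAB loop can be sketched compactly in Python/NumPy. This mirrors the slides' weighting w = (1 − d/dmax)², with dmax taken as the squared distance of neighbor k+1 (the function name and demo data are mine):

```python
import numpy as np

def weighted_local_average(x_train, y_train, x_query, k):
    """Weighted k-NN average with the weighting used in Example 17:
    w_i = (1 - d_i/d_max)^2, where d_i are squared distances of the
    k nearest neighbors and d_max is that of neighbor k+1."""
    d = (x_train - x_query) ** 2
    order = np.argsort(d)
    dn = d[order[:k]]               # squared distances of the k nearest
    dmax = d[order[k]]              # (k+1)th distance sets the scale
    yn = y_train[order[:k]]
    w = (1.0 - dn / dmax) ** 2      # weights fall to 0 at the window edge
    return np.sum(w * yn) / np.sum(w)

# Symmetric neighbors about the query get equal weight, so the
# weighted average of y = 2, 6 and the center value 4 is exactly 4
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
print(weighted_local_average(x, y, 2.0, 3))  # -> 4.0
```

Unlike the plain local average, the weights shrink smoothly to zero as a neighbor leaves the window, which removes the staircase discontinuities of Example 16.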
Local Model Optimality

It can be shown that for fixed weighting functions b(·), both kernel smoothers and weighted local averaging models minimize the weighted average squared error

        ASE ≡ (1/n) Σ_{i=1}^{n} (y_i − ĝ(x_i))² b(|x − x_i|)

Setting the derivative with respect to ĝ(x) to zero,

        dASE/dĝ(x) ∝ Σ_{i=1}^{n} (y_i − ĝ(x)) b(|x − x_i|) = 0

        0 = Σ_{i=1}^{n} y_i b(|x − x_i|) − Σ_{i=1}^{n} ĝ(x) b(|x − x_i|)
          = Σ_{i=1}^{n} y_i b(|x − x_i|) − ĝ(x) Σ_{i=1}^{n} b(|x − x_i|)

        ĝ(x) = Σ_{i=1}^{n} y_i b(|x − x_i|) / Σ_{i=1}^{n} b(|x − x_i|)

Local Model Optimality Continued

        ASE ≡ (1/n) Σ_{i=1}^{n} (y_i − ĝ(x_i))² b(|x − x_i|)

        ĝ*(x) = Σ_{i=1}^{n} y_i b(|x − x_i|) / Σ_{i=1}^{n} b(|x − x_i|)

• Thus, we have an alternative derivation of kernel smoothers and weighted local averaging models
• They are the models that minimize the weighted ASE
• The only difference between kernel smoothers, local averaging models, and weighted local averaging models is the weighting function b(·)
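The optimality claim above can be checked numerically: for any fixed nonnegative weights b_i, the weighted average ĝ*(x) = Σ y_i b_i / Σ b_i minimizes the quadratic ASE(c) = (1/n) Σ (y_i − c)² b_i. A small check in Python (my own construction, not from the slides):

```python
import numpy as np

# Random outputs and fixed positive weights b(|x - x_i|) at some query x
rng = np.random.default_rng(0)
y = rng.normal(size=20)
b = rng.uniform(0.1, 1.0, size=20)

def weighted_ase(c):
    """Weighted average squared error of the constant fit c."""
    return np.mean((y - c) ** 2 * b)

ghat = np.sum(y * b) / np.sum(b)  # the claimed minimizer (weighted mean)

# The ASE is a parabola in c, so ghat should beat any perturbation
eps = 1e-3
assert weighted_ase(ghat) < weighted_ase(ghat + eps)
assert weighted_ase(ghat) < weighted_ase(ghat - eps)
```

Because the ASE is quadratic in the fitted constant with positive curvature Σ b_i, the stationary point found by the derivative above is the unique minimum.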
Bias-Variance Tradeoff Continued

[Figure: Prediction error, bias, and variance versus smoothness]

• Our goal is to minimize the prediction error
• How do we choose the best smoothing parameter?
• All of the methods we discussed had a single parameter that controlled smoothness
  – Smoothing splines had a smoothness penalty parameter λ
  – Kernel methods had the bump width σ

Model Selection

• How do we estimate the prediction error with only one data set?
• The ASE won't work: it monotonically decreases as the smoothness decreases
• All of our smoothers can be written as

        ĝ(x) = ŷ = H(x)y

  for a given input vector x
• This is very similar to the hat matrix of linear models, except now the H matrix is a function of x
• The equivalent degrees of freedom can be estimated by the trace of H
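The hat-matrix view can be made concrete for a kernel smoother. In this sketch (my construction, assuming a Gaussian kernel; tr(H) is used as the equivalent-degrees-of-freedom estimate) each row of H holds the normalized weights the smoother applies at one training input, and the trace shrinks as the bandwidth grows:

```python
import numpy as np

def hat_matrix(x, sigma):
    """Hat matrix of a Gaussian kernel smoother evaluated at the
    training inputs: row i holds the normalized weights so that
    yhat = H @ y is linear in y."""
    d2 = (x[:, None] - x[None, :]) ** 2
    B = np.exp(-d2 / (2.0 * sigma ** 2))     # kernel weights b(|x_i - x_j|)
    return B / B.sum(axis=1, keepdims=True)  # rows sum to 1

x = np.linspace(0.0, 1.0, 30)
df_narrow = np.trace(hat_matrix(x, 0.01))  # nearly interpolates: df close to n
df_wide = np.trace(hat_matrix(x, 10.0))    # nearly a global mean: df close to 1
assert df_wide < df_narrow <= len(x)
```

This is why the equivalent degrees of freedom is a useful axis for model selection: it puts very different smoothers (splines, kernels, local averages) on a common smoothness scale.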
Example: Weighted Averaging CVE

[Figure: Cross-validation error versus number of neighbors k (left) and the corresponding fit to the motorcycle data — head acceleration (g) vs. time (ms) (right)]
Example 18: Local Linear Model

[Figure: Local linear model fits to the motorcycle data for several values of k — head acceleration (g) vs. time (ms), six panels]
[Figure: Local linear model fit to the motorcycle data — head acceleration (g) vs. time (ms)]

Example 18: MATLAB Code

close all;

D = load('Motorcycle.txt');
x = D(:,1);
y = D(:,2);

[x,is] = sort(x);
y = y(is);

% xt: evaluation grid, defined as in Example 16
k = [2 5 10 20 30 50 93];
for cnt = 1:length(k),
  yh = LocalLinear(x,y,xt,k(cnt));
  figure;
  FigureSet(1,'LTX');
  h = plot(x,y,'k.',xt,yh,'b');
  set(h(1),'MarkerSize',8);
  set(h(2),'LineWidth',1.2);
  axis([-10 70 -150 90]);
  xlabel('Time (ms)');
  ylabel('Head Acceleration (g)');
  st = sprintf('Motorcycle Data Set k=%d',k(cnt));
  title(st);
  set(gca,'Layer','top');
  set(gca,'Box','off');
  AxisSet(8);
  st = sprintf('print -depsc LocalLinearEx%02d;',k(cnt));
  eval(st);
end;

Univariate Smoothing Summary

• We discussed four methods of interpolation
  – Linear Interpolation
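The `LocalLinear` helper is not listed in this excerpt; a minimal unweighted version in Python/NumPy, mirroring the design matrix `A = [xn ones(k,1)]; v = pinv(A)*b` from the concept code (the slides' actual helper may weight the neighbors):

```python
import numpy as np

def local_linear(x_train, y_train, x_query, k):
    """Fit a least-squares line to the k nearest neighbors of
    x_query and evaluate it at x_query."""
    d = (x_train - x_query) ** 2
    nearest = np.argsort(d)[:k]
    xn, yn = x_train[nearest], y_train[nearest]
    A = np.column_stack([xn, np.ones(k)])       # design matrix [x 1]
    v, *_ = np.linalg.lstsq(A, yn, rcond=None)  # slope and intercept
    return v[0] * x_query + v[1]

# On exactly linear data the local line recovers y = 3x + 1
x = np.arange(10.0)
y = 3.0 * x + 1.0
print(local_linear(x, y, 4.5, 5))  # -> 14.5
```

Fitting a line rather than a constant removes the boundary bias of local averaging: near the edges of the data the average of one-sided neighbors is biased, while a local line extrapolates the local trend correctly.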