
Interpolation
Given a set of discrete values $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$, find a function that matches these values exactly. The resulting function can then be used to estimate the value of y at a value of x that is not given.

Interpolants
Polynomials are the most common
choice of interpolants because they
are easy to:

Evaluate
Differentiate, and
Integrate.


Direct Method
Given n+1 data points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$, pass a polynomial of order n through the data:

$$y = a_0 + a_1 x + \cdots + a_n x^n$$

where $a_0, a_1, \ldots, a_n$ are real constants.

• Set up n+1 equations to find the n+1 constants.
• To find the value of y at a given value of x, simply substitute the value of x into the above polynomial.
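A minimal sketch of the direct method (an illustration, not from the original slides; `direct_interpolation` is a hypothetical helper name, and numpy is assumed):

```python
import numpy as np

def direct_interpolation(x, y, x_desired):
    """Pass the order-n polynomial through n+1 points, then evaluate it.

    Solves the Vandermonde system A a = y for the coefficients a0..an.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.vander(x, increasing=True)   # row i = [1, x_i, x_i^2, ..., x_i^n]
    a = np.linalg.solve(A, y)           # coefficients a0..an
    return sum(ak * x_desired**k for k, ak in enumerate(a))

# Linear interpolation from the rocket example below:
print(direct_interpolation([15, 20], [362.78, 517.35], 16))   # ~393.7 m/s
```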

Linear Interpolation
The upward velocity of a rocket is given as a function of time. Find the velocity at t = 16 seconds using the direct method for linear interpolation.

Table 1. Velocity as a function of time (rocket example)

t (s)    v(t) (m/s)
0        0
10       227.04
15       362.78
20       517.35
22.5     602.97
30       901.67


Linear Interpolation

$$v(t) = a_0 + a_1 t$$

Choosing the two points that bracket t = 16, (15, 362.78) and (20, 517.35):

$$v(15) = a_0 + a_1 (15) = 362.78$$

$$v(20) = a_0 + a_1 (20) = 517.35$$

Solving the above two equations gives

$$a_0 = -100.91, \qquad a_1 = 30.913$$

Hence

$$v(t) = -100.91 + 30.913\,t, \quad 15 \le t \le 20$$

$$v(16) = -100.91 + 30.913\,(16) = 393.7 \text{ m/s}$$

Figure. Linear interpolant through (15, 362.78) and (20, 517.35), evaluated at t = 16

Quadratic Interpolation
Find the velocity at t = 16 seconds using the direct method for quadratic interpolation.

$$v(t) = a_0 + a_1 t + a_2 t^2$$

Choosing the three points of Table 1 closest to t = 16, (10, 227.04), (15, 362.78), and (20, 517.35):

$$v(10) = a_0 + a_1 (10) + a_2 (10)^2 = 227.04$$

$$v(15) = a_0 + a_1 (15) + a_2 (15)^2 = 362.78$$

$$v(20) = a_0 + a_1 (20) + a_2 (20)^2 = 517.35$$

Solving the above three equations gives

$$a_0 = 12.001, \qquad a_1 = 17.740, \qquad a_2 = 0.37637$$

Hence

$$v(16) = 12.001 + 17.740\,(16) + 0.37637\,(16)^2 = 392.19 \text{ m/s}$$

Figure. Quadratic interpolant through the three chosen points, evaluated at t = 16


Cubic Interpolation
Find the velocity at t = 16 seconds using the direct method for cubic interpolation.

$$v(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3$$

Using the four points of Table 1 that bracket t = 16, (10, 227.04), (15, 362.78), (20, 517.35), and (22.5, 602.97), and solving for the constants gives

$$v(t) = -4.3810 + 21.289\,t + 0.13064\,t^2 + 0.0054606\,t^3, \quad 10 \le t \le 22.5$$

$$v(16) = -4.3810 + 21.289\,(16) + 0.13064\,(16)^2 + 0.0054606\,(16)^3 = 392.06 \text{ m/s}$$

Figure. Cubic interpolant through the four chosen points, evaluated at t = 16

Find the distance covered by the rocket from t = 11 s to t = 16 s:

$$s(16) - s(11) = \int_{11}^{16} v(t)\,dt \approx \int_{11}^{16} \left( -4.3810 + 21.289\,t + 0.13064\,t^2 + 0.0054606\,t^3 \right) dt = 1605 \text{ m}$$

Find the acceleration of the rocket at t = 16 s:

$$a(t) = \frac{d}{dt}\,v(t) = \frac{d}{dt}\left( -4.3810 + 21.289\,t + 0.13064\,t^2 + 0.0054606\,t^3 \right) = 21.289 + 0.26130\,t + 0.016382\,t^2, \quad 10 \le t \le 22.5$$

$$a(16) = 21.289 + 0.26130\,(16) + 0.016382\,(16)^2 = 29.664 \text{ m/s}^2$$
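These results are easy to check numerically. A hedged sketch (not from the slides; numpy assumed) using numpy's polynomial helpers to fit, evaluate, integrate, and differentiate the cubic:

```python
import numpy as np

t = np.array([10.0, 15.0, 20.0, 22.5])
v = np.array([227.04, 362.78, 517.35, 602.97])

# Unique cubic through four points (degree = number of points - 1).
cubic = np.poly1d(np.polyfit(t, v, deg=3))

print(cubic(16.0))             # velocity at t = 16 s, ~392.06 m/s

V = cubic.integ()              # antiderivative of v(t)
print(V(16.0) - V(11.0))       # distance from 11 s to 16 s, ~1605 m

print(cubic.deriv()(16.0))     # acceleration at t = 16 s, ~29.66 m/s^2
```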


Lagrangian Interpolation

Consider a line segment that goes through $(x_0, y_0)$ and $(x_1, y_1)$, written in point-slope form, where m is the slope.
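The equations on this slide did not survive extraction; a reconstruction of the standard development:

$$P_1(x) = y_0 + m\,(x - x_0), \qquad m = \frac{y_1 - y_0}{x_1 - x_0},$$

which rearranges into the Lagrange form

$$P_1(x) = y_0\,\frac{x - x_1}{x_0 - x_1} + y_1\,\frac{x - x_0}{x_1 - x_0}.$$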


General form of Lagrange polynomials
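The formula itself did not survive extraction; the standard general form is

$$P_N(x) = \sum_{k=0}^{N} y_k\,L_{N,k}(x), \qquad L_{N,k}(x) = \prod_{\substack{j=0 \\ j \neq k}}^{N} \frac{x - x_j}{x_k - x_j},$$

so that $L_{N,k}(x_j) = 1$ if $j = k$ and $0$ otherwise, and $P_N$ passes through all N+1 points.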


Compare with Taylor approximation
A Taylor polynomial matches the function and its derivatives at a single point, while the Lagrange polynomial matches the function's values at N+1 distinct points.


For N = 1

Error bounds for equally spaced nodes

Lagrange polynomial approximation on the interval [0, 2]:
• Interpolation for 0 ≤ x ≤ 2
• Extrapolation for x < 0 or x > 2
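The bound itself was lost in extraction; for N = 1 with equally spaced nodes $x_0$ and $x_1 = x_0 + h$, the standard result is

$$\left| f(x) - P_1(x) \right| \le \frac{h^2 M_2}{8}, \qquad M_2 = \max_{x_0 \le x \le x_1} \left| f''(x) \right|,$$

valid for interpolation ($x_0 \le x \le x_1$) when $f''$ is continuous.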


Newton's Divided Difference Polynomial Method

It is sometimes useful to find several approximating polynomials $P_1(x), P_2(x), \ldots, P_N(x)$ and then choose the one that suits our needs. If Lagrange polynomials are used, there is no constructive relationship between $P_{N-1}(x)$ and $P_N(x)$: each polynomial has to be constructed individually, and computing the higher-degree polynomials requires many operations.


Newton polynomials have the recursive pattern:
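In the notation used below (with coefficients $b_k$), the recursion shown on this slide is

$$P_N(x) = P_{N-1}(x) + b_N\,(x - x_0)(x - x_1)\cdots(x - x_{N-1}),$$

so each higher-degree polynomial reuses the previous one and adds a single new term.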

Newton's Divided Difference Method

Linear interpolation: Given $(x_0, y_0)$ and $(x_1, y_1)$, pass a linear interpolant through the data:

$$f_1(x) = b_0 + b_1 (x - x_0)$$

where

$$b_0 = f(x_0), \qquad b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$$


Quadratic Interpolation
Given $(x_0, y_0)$, $(x_1, y_1)$, and $(x_2, y_2)$, fit a quadratic interpolant through the data:

$$f_2(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1)$$

$$b_0 = f(x_0)$$

$$b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$$

$$b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}$$

General Form

$$f_2(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1)$$

where

$$b_0 = f[x_0] = f(x_0)$$

$$b_1 = f[x_1, x_0] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$$

$$b_2 = f[x_2, x_1, x_0] = \frac{f[x_2, x_1] - f[x_1, x_0]}{x_2 - x_0} = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}$$

Rewriting,

$$f_2(x) = f[x_0] + f[x_1, x_0]\,(x - x_0) + f[x_2, x_1, x_0]\,(x - x_0)(x - x_1)$$


General Form
Given (n+1) data points $(x_0, y_0), (x_1, y_1), \ldots, (x_{n-1}, y_{n-1}), (x_n, y_n)$, the interpolant is

$$f_n(x) = b_0 + b_1 (x - x_0) + \cdots + b_n (x - x_0)(x - x_1)\cdots(x - x_{n-1})$$

where

$$b_0 = f[x_0]$$
$$b_1 = f[x_1, x_0]$$
$$b_2 = f[x_2, x_1, x_0]$$
$$\vdots$$
$$b_{n-1} = f[x_{n-1}, x_{n-2}, \ldots, x_0]$$
$$b_n = f[x_n, x_{n-1}, \ldots, x_0]$$

The third-order polynomial, given $(x_0, y_0), (x_1, y_1), (x_2, y_2)$, and $(x_3, y_3)$, is

$$f_3(x) = f[x_0] + f[x_1, x_0]\,(x - x_0) + f[x_2, x_1, x_0]\,(x - x_0)(x - x_1) + f[x_3, x_2, x_1, x_0]\,(x - x_0)(x - x_1)(x - x_2)$$

The divided-difference table:

x_0   f(x_0) = b_0
                 f[x_1, x_0] = b_1
x_1   f(x_1)                    f[x_2, x_1, x_0] = b_2
                 f[x_2, x_1]                       f[x_3, x_2, x_1, x_0] = b_3
x_2   f(x_2)                    f[x_3, x_2, x_1]
                 f[x_3, x_2]
x_3   f(x_3)


The upward velocity of a rocket is given as a function of time (Table 1). Find the velocity at t = 16 seconds using the Newton divided difference method for cubic interpolation.

$$v(t) = b_0 + b_1 (t - t_0) + b_2 (t - t_0)(t - t_1) + b_3 (t - t_0)(t - t_1)(t - t_2)$$

Choosing the four points (10, 227.04), (15, 362.78), (20, 517.35), and (22.5, 602.97), the divided-difference table is:

t_k          f[t_k]     f[ , ]    f[ , , ]    f[ , , , ]
t_0 = 10     227.04
                        27.148
t_1 = 15     362.78               0.37660
                        30.914                5.4347×10⁻³
t_2 = 20     517.35               0.44453
                        34.248
t_3 = 22.5   602.97

b_0 = 227.04;  b_1 = 27.148;  b_2 = 0.37660;  b_3 = 5.4347×10⁻³
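A hedged sketch (not from the slides; `newton_divided_difference` and `newton_eval` are hypothetical helper names, numpy assumed) that reproduces the table and the evaluation at t = 16 s:

```python
import numpy as np

def newton_divided_difference(x, y):
    """Return b0..bn, where b[k] = f[x_k, ..., x_0]."""
    x = np.asarray(x, dtype=float)
    b = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        # After pass j, b[i] holds the j-th divided difference f[x_i,...,x_{i-j}].
        b[j:] = (b[j:] - b[j - 1:-1]) / (x[j:] - x[:-j])
    return b

def newton_eval(b, x_nodes, t):
    """Evaluate the Newton form with nested (Horner-like) multiplication."""
    result = b[-1]
    for k in range(len(b) - 2, -1, -1):
        result = result * (t - x_nodes[k]) + b[k]
    return result

t_nodes = [10.0, 15.0, 20.0, 22.5]
v_nodes = [227.04, 362.78, 517.35, 602.97]
b = newton_divided_difference(t_nodes, v_nodes)
print(b)                               # ~[227.04, 27.148, 0.37660, 0.0054347]
print(newton_eval(b, t_nodes, 16.0))   # ~392.06 m/s
```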


With b_0 = 227.04, b_1 = 27.148, b_2 = 0.37660, and b_3 = 5.4347×10⁻³, substituting into the Newton form gives

$$v(16) = 227.04 + 27.148\,(16 - 10) + 0.37660\,(16 - 10)(16 - 15) + 5.4347 \times 10^{-3}\,(16 - 10)(16 - 15)(16 - 20) = 392.06 \text{ m/s}$$

which matches the direct-method cubic result.


Spline Interpolation Method

Pitfalls of polynomial approximation

$$f(x) = \frac{1}{1 + 25x^2}$$

Table. Six equidistantly spaced points in [−1, 1]

x       y = 1/(1 + 25x²)
−1.0    0.038461
−0.6    0.1
−0.2    0.5
0.2     0.5
0.6     0.1
1.0     0.038461

Figure. 5th-order polynomial vs. exact function


Pitfalls of polynomial approximation

Figure. Higher-order polynomial interpolation is a bad idea: the 19th-order and 5th-order interpolating polynomials, compared with f(x) on [−1, 1]

Linear spline interpolation

Applying the linear Lagrange interpolation polynomial piecewise, that is, between each pair of successive points, we get N linear polynomials for N+1 points.

Note: the interpolation function is continuous over the domain, while the slope is discontinuous.
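A hedged sketch (not from the slides; numpy assumed): numpy's built-in `np.interp` is exactly this piecewise-linear interpolant.

```python
import numpy as np

# Samples of the Runge function from the table above.
x = np.array([-1.0, -0.6, -0.2, 0.2, 0.6, 1.0])
y = 1.0 / (1.0 + 25.0 * x**2)

x_fine = np.linspace(-1.0, 1.0, 201)
y_lin = np.interp(x_fine, x, y)   # continuous, but the slope jumps at the nodes
```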


Quadratic spline interpolation

We can also use the quadratic Lagrange interpolation polynomial piecewise, over three consecutive points at a time.

Note: the interpolation function and its slope are continuous over the domain, while the curvature is discontinuous.

The most popular splines are the cubic splines.

Given N+1 points, construct N cubic polynomials S_k, one for each interval $[x_k, x_{k+1}]$, so that the resulting cubic spline $S(x)$, its first derivative $S'(x)$, and its second derivative $S''(x)$ are continuous over $[x_0, x_N]$.
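A hedged sketch (not from the slides; scipy assumed) of a cubic spline through the Runge-function samples:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([-1.0, -0.6, -0.2, 0.2, 0.6, 1.0])
y = 1.0 / (1.0 + 25.0 * x**2)

# CubicSpline enforces continuity of S, S', and S'' at the interior nodes
# (default boundary condition is 'not-a-knot').
spline = CubicSpline(x, y)
x_fine = np.linspace(-1.0, 1.0, 201)
y_spline = spline(x_fine)   # smooth, without the oscillation of the degree-5 fit
```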


Construction of Cubic Splines
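The derivation on these slides was lost in extraction; a hedged outline of the standard construction: on each interval $[x_k, x_{k+1}]$ write

$$S_k(x) = a_k + b_k (x - x_k) + c_k (x - x_k)^2 + d_k (x - x_k)^3, \qquad k = 0, 1, \ldots, N-1.$$

Requiring $S_k(x_k) = y_k$ and $S_k(x_{k+1}) = y_{k+1}$ (interpolation), continuity of $S'$ and $S''$ at the interior nodes, and two endpoint conditions (for example, the natural spline choice $S''(x_0) = S''(x_N) = 0$) gives a tridiagonal linear system for the second derivatives at the nodes, which is solved and back-substituted to obtain the coefficients.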


Linear Regression
(Linear Curve Fitting)


What is Regression?
Given n data points $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, best fit $y = f(x)$ to the data. The best fit is generally based on minimizing the sum of the squares of the residuals, $S_r$.

The residual at a point is

$$\varepsilon_i = y_i - f(x_i)$$

The sum of the squares of the residuals is

$$S_r = \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2$$

Figure. Basic model for regression

Linear Regression - Criterion #1
Given n data points $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, best fit $y = a_0 + a_1 x$ to the data, with residual

$$\varepsilon_i = y_i - a_0 - a_1 x_i$$

Figure. Linear regression of y vs. x data showing residuals at a typical point, $x_i$

Does minimizing $\sum_{i=1}^{n} \varepsilon_i$ work as a criterion, where $\varepsilon_i = y_i - (a_0 + a_1 x_i)$?


Example for Criterion #1
Example: Given the data points (2, 4), (3, 6), (2, 6), and (3, 8), best fit the data to a straight line using Criterion #1.

Table. Data points

x     y
2.0   4.0
3.0   6.0
2.0   6.0
3.0   8.0

Figure. Data points for y vs. x data

Linear Regression - Criterion #1
Using y = 4x − 4 as the regression curve:

Table. Residuals at each point for the regression model y = 4x − 4

x     y     y_predicted   ε = y − y_predicted
2.0   4.0   4.0            0.0
3.0   6.0   8.0           −2.0
2.0   6.0   4.0            2.0
3.0   8.0   8.0            0.0

$$\sum_{i=1}^{4} \varepsilon_i = 0$$

Figure. Regression curve y = 4x − 4 with the y vs. x data


Linear Regression - Criterion #1
Using y = 6 as the regression curve:

Table. Residuals at each point for y = 6

x     y     y_predicted   ε = y − y_predicted
2.0   4.0   6.0           −2.0
3.0   6.0   6.0            0.0
2.0   6.0   6.0            0.0
3.0   8.0   6.0            2.0

$$\sum_{i=1}^{4} \varepsilon_i = 0$$

Figure. Regression curve y = 6 with the y vs. x data

Linear Regression - Criterion #1

$\sum_{i=1}^{4} \varepsilon_i = 0$ for both regression models, y = 4x − 4 and y = 6.

The sum of the residuals is as small as possible (zero), but the regression model is not unique. Hence minimizing the sum of the residuals is a bad criterion.


Linear Regression - Criterion #2

Will minimizing $\sum_{i=1}^{n} |\varepsilon_i|$ work any better, where $\varepsilon_i = y_i - a_0 - a_1 x_i$?

Figure. Linear regression of y vs. x data showing residuals at a typical point, $x_i$

Linear Regression - Criterion #2
Using y = 4x − 4 as the regression curve:

Table. Absolute residuals at each point for the y = 4x − 4 regression model

x     y     y_predicted   |ε| = |y − y_predicted|
2.0   4.0   4.0           0.0
3.0   6.0   8.0           2.0
2.0   6.0   4.0           2.0
3.0   8.0   8.0           0.0

$$\sum_{i=1}^{4} |\varepsilon_i| = 4$$

Figure. Regression curve y = 4x − 4 with the y vs. x data


Linear Regression - Criterion #2
Using y = 6 as the regression curve:

Table. Absolute residuals at each point for the y = 6 model

x     y     y_predicted   |ε| = |y − y_predicted|
2.0   4.0   6.0           2.0
3.0   6.0   6.0           0.0
2.0   6.0   6.0           0.0
3.0   8.0   6.0           2.0

$$\sum_{i=1}^{4} |\varepsilon_i| = 4$$

Figure. Regression curve y = 6 with the y vs. x data

Linear Regression - Criterion #2

$\sum_{i=1}^{4} |\varepsilon_i| = 4$ for both regression models, y = 4x − 4 and y = 6.

The sum of the absolute residuals has been made as small as possible (4), but the regression model is still not unique. Hence minimizing the sum of the absolute values of the residuals is also a bad criterion.


Least Squares Criterion
The least squares criterion minimizes the sum of the squares of the residuals in the model, and also produces a unique line:

$$S_r = \sum_{i=1}^{n} \varepsilon_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)^2, \qquad \varepsilon_i = y_i - a_0 - a_1 x_i$$

Figure. Linear regression of y vs. x data showing residuals at a typical point, $x_i$

Finding Constants of Linear Model

Minimize the sum of the squares of the residuals:

$$S_r = \sum_{i=1}^{n} \varepsilon_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)^2$$

To find $a_0$ and $a_1$, we minimize $S_r$ with respect to $a_0$ and $a_1$:

$$\frac{\partial S_r}{\partial a_0} = -2 \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right) = 0$$

$$\frac{\partial S_r}{\partial a_1} = -2 \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right) x_i = 0$$

giving the normal equations

$$n a_0 + a_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i$$

$$a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i$$

which are solved for $a_0$ and $a_1$.
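A hedged sketch (not from the slides; `linear_fit` is a hypothetical helper name, numpy assumed) that solves the two normal equations directly:

```python
import numpy as np

def linear_fit(x, y):
    """Least-squares fit y = a0 + a1*x via the normal equations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    # [ n        sum(x)   ] [a0]   [ sum(y)   ]
    # [ sum(x)   sum(x^2) ] [a1] = [ sum(x*y) ]
    A = np.array([[n, x.sum()],
                  [x.sum(), (x**2).sum()]])
    rhs = np.array([y.sum(), (x * y).sum()])
    a0, a1 = np.linalg.solve(A, rhs)
    return a0, a1
```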


Example 1
The torque T needed to turn the torsion spring of a mousetrap through an angle θ is given below. Find the constants of the model

$$T = k_1 + k_2 \theta$$

Table. Torque vs. angle for a torsional spring

Angle θ (radians)   Torque T (N·m)
0.698132            0.188224
0.959931            0.209138
1.134464            0.230052
1.570796            0.250965
1.919862            0.313707

Figure. Data points for torque vs. angle

Example 1 cont.
The following table shows the summations needed to calculate the constants of the regression model.

Table. Tabulation of data for the required summations (n = 5)

θ (radians)   T (N·m)     θ² (radians²)   Tθ (N·m·radians)
0.698132      0.188224    0.487388        0.131405
0.959931      0.209138    0.921468        0.200758
1.134464      0.230052    1.2870          0.260986
1.570796      0.250965    2.4674          0.394215
1.919862      0.313707    3.6859          0.602274
Σ: 6.2831     1.1921      8.8491          1.5896

Substituting these sums into the normal equations above with n = 5 gives

$$a_0 = 1.1767 \times 10^{-1} \text{ N·m}, \qquad a_1 = 9.6091 \times 10^{-2} \text{ N·m/rad}$$


Example 1 Results
Using linear regression, a trend line is found from the data:

$$T = 0.11767 + 0.096091\,\theta$$

Figure. Linear regression of torque versus angle data

Can you find the energy in the spring if it is twisted from 0 to 180 degrees?
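A hedged answer sketch (not from the slides; numpy assumed): the energy is the integral of T dθ from 0 to π radians. Note this extrapolates the fitted line below the smallest measured angle.

```python
import numpy as np

theta = np.array([0.698132, 0.959931, 1.134464, 1.570796, 1.919862])
T = np.array([0.188224, 0.209138, 0.230052, 0.250965, 0.313707])

a1, a0 = np.polyfit(theta, T, deg=1)   # slope, intercept
print(a0, a1)                          # ~0.11767, ~0.096091

# Energy = integral_0^pi (a0 + a1*theta) dtheta = a0*pi + a1*pi^2/2
energy = a0 * np.pi + a1 * np.pi**2 / 2.0
print(energy)                          # ~0.84 joules
```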


Generalized Polynomial Model
Given $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, best fit

$$y = a_0 + a_1 x + \cdots + a_m x^m \qquad (m \le n - 2)$$

to the given data set.

Figure. Polynomial model, showing the residual $y_i - f(x_i)$ at a typical point $(x_i, y_i)$

Generalized Polynomial Model cont.
The residual at each data point is given by

$$E_i = y_i - a_0 - a_1 x_i - \cdots - a_m x_i^m$$

The sum of the squares of the residuals is then

$$S_r = \sum_{i=1}^{n} E_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i - \cdots - a_m x_i^m \right)^2$$


Generalized Polynomial Model cont.
To find the constants of the polynomial model, we set the derivative with respect to each $a_i$, $i = 0, 1, \ldots, m$, equal to zero:

$$\frac{\partial S_r}{\partial a_0} = \sum_{i=1}^{n} 2\left( y_i - a_0 - a_1 x_i - \cdots - a_m x_i^m \right)(-1) = 0$$

$$\frac{\partial S_r}{\partial a_1} = \sum_{i=1}^{n} 2\left( y_i - a_0 - a_1 x_i - \cdots - a_m x_i^m \right)(-x_i) = 0$$

$$\vdots$$

$$\frac{\partial S_r}{\partial a_m} = \sum_{i=1}^{n} 2\left( y_i - a_0 - a_1 x_i - \cdots - a_m x_i^m \right)(-x_i^m) = 0$$

Generalized Polynomial Model cont.
These equations in matrix form are given by

$$
\begin{bmatrix}
n & \sum_{i=1}^{n} x_i & \cdots & \sum_{i=1}^{n} x_i^m \\
\sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 & \cdots & \sum_{i=1}^{n} x_i^{m+1} \\
\vdots & \vdots & & \vdots \\
\sum_{i=1}^{n} x_i^m & \sum_{i=1}^{n} x_i^{m+1} & \cdots & \sum_{i=1}^{n} x_i^{2m}
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_m \end{bmatrix}
=
\begin{bmatrix}
\sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} x_i y_i \\ \vdots \\ \sum_{i=1}^{n} x_i^m y_i
\end{bmatrix}
$$

The above equations are then solved for $a_0, a_1, \ldots, a_m$.
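A hedged sketch (not from the slides; `poly_fit` is a hypothetical helper name, numpy assumed). The matrix above is exactly $V^T V$ for the Vandermonde matrix $V$ with $V_{ij} = x_i^j$:

```python
import numpy as np

def poly_fit(x, y, m):
    """Least-squares polynomial fit a0..am via the normal equations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    V = np.vander(x, m + 1, increasing=True)   # V[i, j] = x_i**j
    # V.T @ V is the summation matrix above; V.T @ y is the right-hand side.
    a = np.linalg.solve(V.T @ V, V.T @ y)
    return a
```

In practice `np.polyfit` (or a least-squares solve on V itself) is preferred, since the normal equations can be badly conditioned for larger m.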


Nonlinear Regression


For a nonlinear model such as $y = C e^{Ax}$, the least squares criterion leads to simultaneous nonlinear equations in C and A. Newton's method can be used for the solution (or the fminsearch command in MATLAB).

Linearization of Data

For some models that would otherwise require the simultaneous solution of nonlinear equations, data linearization is mathematically convenient. For example, the data for an exponential model $y = C e^{Ax}$ can be linearized by taking logarithms,

$$\ln y = \ln C + A x,$$

so a straight-line fit of $\ln y$ versus $x$ gives A as the slope and $\ln C$ as the intercept. (See Table 5.6 in the textbook for other models.)
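A hedged sketch (not from the slides; `exp_fit_linearized` is a hypothetical helper name, numpy assumed):

```python
import numpy as np

def exp_fit_linearized(x, y):
    """Fit y = C*exp(A*x) by a linear fit on (x, ln y). Requires y > 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A, lnC = np.polyfit(x, np.log(y), deg=1)   # slope, intercept
    return np.exp(lnC), A                      # C, A
```

Note that this minimizes the squared error in ln y rather than in y, so it generally differs slightly from the true nonlinear least-squares fit.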

