Interpolation and Curve Fitting
Interpolation
Given a set of discrete values (x0, y0), (x1, y1), …, (xn, yn), find a function that matches these values exactly. The resulting function can then be used to estimate the value of y at a value of x that is not given.
Interpolants
Polynomials are the most common
choice of interpolants because they
are easy to:
Evaluate
Differentiate, and
Integrate.
http://numericalmethods.eng.usf.edu 1
11/14/2007
Direct Method
Given n+1 data points (x0, y0), (x1, y1), …, (xn, yn), pass a polynomial of order n through the data:
y = a0 + a1 x + … + an x^n
where a0, a1, …, an are real constants. Requiring the polynomial to pass through each data point sets up n+1 equations in the n+1 constants.
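As a sketch, the direct method amounts to solving a Vandermonde linear system. A minimal Python example (the linear case, using the two rocket velocity data points from these notes that bracket t = 16):

```python
import numpy as np

# Direct method: an nth-order polynomial through n+1 points is found by
# solving the (n+1)x(n+1) Vandermonde system for a0..an.
# Linear case with the two rocket data points that bracket t = 16.
t = np.array([15.0, 20.0])
v = np.array([362.78, 517.35])

A = np.vander(t, increasing=True)   # rows [1, t_i]
a0, a1 = np.linalg.solve(A, v)

v16 = a0 + a1 * 16.0
print(a0, a1, v16)   # v(16) is approximately 393.7 m/s
```

The same pattern extends to the quadratic and cubic cases: add more points and the Vandermonde matrix grows one column per polynomial order.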
Linear Interpolation
The upward velocity of a rocket is given as a
function of time. Find the velocity at t=16
seconds using the direct method for linear
interpolation.
t (s)    v(t) (m/s)
0        0
10       227.04
15       362.78
20       517.35
22.5     602.97
30       901.67
Table. Velocity as a function of time for the rocket example.
Linear Interpolation
v(t) = a0 + a1 t
Using the two points that bracket t = 16, (15, 362.78) and (20, 517.35):
v(t) = −100.91 + 30.913t,  15 ≤ t ≤ 20
v(16) = −100.91 + 30.913(16) = 393.7 m/s
[Figure: linear interpolant through the two bracketing data points]
Quadratic Interpolation
v(t) = a0 + a1 t + a2 t²
Using the three points closest to t = 16, (10, 227.04), (15, 362.78), and (20, 517.35):
v(t) = 12.001 + 17.740t + 0.37637t²,  10 ≤ t ≤ 20
v(16) = 12.001 + 17.740(16) + 0.37637(16)² = 392.19 m/s
[Figure: quadratic interpolant through the three chosen data points]
Cubic Interpolation
Find the velocity at t = 16 seconds using the direct method for cubic interpolation.
Using the four points closest to t = 16, (10, 227.04), (15, 362.78), (20, 517.35), and (22.5, 602.97):
v(t) = −4.3810 + 21.289t + 0.13064t² + 0.0054606t³,  10 ≤ t ≤ 22.5
v(16) = −4.3810 + 21.289(16) + 0.13064(16)² + 0.0054606(16)³ = 392.06 m/s
[Figure: cubic interpolant through the four chosen data points]
The acceleration follows by differentiating the interpolant:
a(t) = d/dt v(t) = d/dt (−4.3810 + 21.289t + 0.13064t² + 0.0054606t³)
     = 21.289 + 0.26130t + 0.016382t²,  10 ≤ t ≤ 22.5
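The cubic direct method sketched above can be checked numerically; a minimal Python example (same four data points, with the acceleration evaluated from the derivative of the fitted cubic):

```python
import numpy as np

# Cubic direct method: four points give a unique cubic
# v(t) = a0 + a1 t + a2 t^2 + a3 t^3.
t = np.array([10.0, 15.0, 20.0, 22.5])
v = np.array([227.04, 362.78, 517.35, 602.97])

A = np.vander(t, increasing=True)   # rows [1, t, t^2, t^3]
a = np.linalg.solve(A, v)

v16 = sum(c * 16.0**k for k, c in enumerate(a))
a16 = a[1] + 2 * a[2] * 16.0 + 3 * a[3] * 16.0**2   # dv/dt at t = 16
print(v16, a16)
```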
Lagrangian Interpolation
For N = 1:
Interpolation: 0 ≤ x ≤ 2
Extrapolation: x < 0 or x > 2
The linear interpolant in Newton's divided-difference form is
f1(x) = b0 + b1(x − x0)
where
b0 = f(x0)
b1 = (f(x1) − f(x0)) / (x1 − x0)
Quadratic Interpolation
Given (x0, y0), (x1, y1), and (x2, y2), fit a quadratic interpolant through the data:
f2(x) = b0 + b1(x − x0) + b2(x − x0)(x − x1)
where
b0 = f(x0)
b1 = (f(x1) − f(x0)) / (x1 − x0)
b2 = [ (f(x2) − f(x1))/(x2 − x1) − (f(x1) − f(x0))/(x1 − x0) ] / (x2 − x0)
General Form
f2(x) = b0 + b1(x − x0) + b2(x − x0)(x − x1)
where
b0 = f[x0] = f(x0)
b1 = f[x1, x0] = (f(x1) − f(x0)) / (x1 − x0)
b2 = f[x2, x1, x0] = (f[x2, x1] − f[x1, x0]) / (x2 − x0)
   = [ (f(x2) − f(x1))/(x2 − x1) − (f(x1) − f(x0))/(x1 − x0) ] / (x2 − x0)
Rewriting,
f2(x) = f[x0] + f[x1, x0](x − x0) + f[x2, x1, x0](x − x0)(x − x1)
http://numericalmethods.eng.usf.edu 11
11/14/2007
General Form
Given (n + 1) data points (x0, y0), (x1, y1), …, (xn−1, yn−1), (xn, yn), the interpolant is
fn(x) = b0 + b1(x − x0) + … + bn(x − x0)(x − x1)…(x − xn−1)
where
b0 = f[x0]
b1 = f[x1, x0]
b2 = f[x2, x1, x0]
⋮
bn−1 = f[xn−1, xn−2, …, x0]
bn = f[xn, xn−1, …, x0]
General Form
The third-order polynomial, given (x0, y0), (x1, y1), (x2, y2), and (x3, y3), is
f3(x) = f[x0] + f[x1, x0](x − x0) + f[x2, x1, x0](x − x0)(x − x1) + f[x3, x2, x1, x0](x − x0)(x − x1)(x − x2)
Divided-difference table (the top entry of each column gives b0, b1, b2, b3):

x0  f(x0)
              f[x1, x0]
x1  f(x1)                 f[x2, x1, x0]
              f[x2, x1]                    f[x3, x2, x1, x0]
x2  f(x2)                 f[x3, x2, x1]
              f[x3, x2]
x3  f(x3)
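The divided-difference table above can be built programmatically, one column of differences per pass. A minimal Python sketch, using the four rocket data points from these notes:

```python
import numpy as np

# Newton's divided-difference coefficients b0..bn, built column by column
# exactly as in the table: each pass forms the next-order differences.
def newton_coeffs(x, y):
    x = np.asarray(x, dtype=float)
    b = np.array(y, dtype=float)
    for j in range(1, len(x)):
        # b[i] becomes f[x_i, ..., x_{i-j}] for i >= j
        b[j:] = (b[j:] - b[j-1:-1]) / (x[j:] - x[:-j])
    return b

def newton_eval(x, b, xv):
    # Nested (Horner-like) evaluation of the Newton form
    result = b[-1]
    for k in range(len(b) - 2, -1, -1):
        result = result * (xv - x[k]) + b[k]
    return result

# Rocket data: the four points closest to t = 16
t = [10.0, 15.0, 20.0, 22.5]
v = [227.04, 362.78, 517.35, 602.97]
b = newton_coeffs(t, v)
v16 = newton_eval(t, b, 16.0)
print(v16)   # matches the direct method's 392.06 m/s
```

Unlike the direct method, adding another data point only appends one coefficient; the existing b values do not change.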
Table 1: Velocity as a function of time
t (s)    v(t) (m/s)
0        0
10       227.04
15       362.78
20       517.35
22.5     602.97
30       901.67
f(x) = 1 / (1 + 25x²)
Table: Six equidistantly spaced points in [−1, 1]
x       y = 1/(1 + 25x²)
−1.0    0.038461
−0.6    0.1
−0.2    0.5
0.2     0.5
0.6     0.1
1.0     0.038461
[Figure: f(x) = 1/(1 + 25x²) compared with its 5th-order and 19th-order interpolating polynomials on −1 ≤ x ≤ 1]
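The oscillation shown in the figure (Runge's phenomenon) can be reproduced numerically; a minimal Python sketch comparing the maximum interpolation error for the 5th-order and 19th-order polynomials on equidistant nodes:

```python
import numpy as np

# Lagrange interpolation of Runge's function f(x) = 1/(1 + 25x^2) on
# equidistant nodes, to illustrate why higher order is not always better.
def lagrange_eval(xn, yn, x):
    # Evaluate the Lagrange interpolant through (xn, yn) at the points x.
    total = np.zeros_like(x)
    for i in range(len(xn)):
        li = np.ones_like(x)
        for j in range(len(xn)):
            if j != i:
                li *= (x - xn[j]) / (xn[i] - xn[j])
        total += yn[i] * li
    return total

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xs = np.linspace(-1.0, 1.0, 401)

err = {}
for n in (5, 19):
    xn = np.linspace(-1.0, 1.0, n + 1)   # n+1 equidistant nodes
    err[n] = np.max(np.abs(lagrange_eval(xn, f(xn), xs) - f(xs)))
print(err)   # the 19th-order error near the interval ends is far larger
```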
Linear Regression
(Linear Curve Fitting)
What is Regression?
Given n data points (x1, y1), (x2, y2), …, (xn, yn), best fit y = f(x) to the data. The best fit is generally based on minimizing the sum of the squares of the residuals, Sr.
The residual at a point is
εi = yi − f(xi)
and the sum of the squares of the residuals is
Sr = ∑_{i=1}^{n} (yi − f(xi))²
Figure. Basic model for regression
Linear Regression - Criterion #1
Given n data points (x1, y1), (x2, y2), …, (xn, yn), best fit y = a0 + a1 x to the data.
The residual at a typical point xi is
εi = yi − a0 − a1 xi
Figure. Linear regression of y vs. x data showing residuals at a typical point, xi.
Does minimizing ∑_{i=1}^{n} εi work as a criterion, where εi = yi − (a0 + a1 xi)?
Example: Given the data points (2, 4), (3, 6), (2, 6), and (3, 8), best fit the data to a straight line using Criterion #1.

x      y
2.0    4.0
3.0    6.0
2.0    6.0
3.0    8.0

Figure. Data points plotted as y vs. x.
Linear Regression - Criterion #1
Using y = 4x − 4 as the regression curve:

x      y      ypredicted   ε = y − ypredicted
2.0    4.0    4.0           0.0
3.0    6.0    8.0          −2.0
2.0    6.0    4.0           2.0
3.0    8.0    8.0           0.0

∑_{i=1}^{4} εi = 0
Figure. Regression curve y = 4x − 4 with the y vs. x data.
Linear Regression - Criterion #1
Using y = 6 as the regression curve:

x      y      ypredicted   ε = y − ypredicted
2.0    4.0    6.0          −2.0
3.0    6.0    6.0           0.0
2.0    6.0    6.0           0.0
3.0    8.0    6.0           2.0

∑_{i=1}^{4} εi = 0
Figure. Regression curve y = 6 with the y vs. x data.
∑ εi = 0 for both regression models, y = 4x − 4 and y = 6. Since the criterion does not single out a unique model, minimizing the sum of the residuals is a bad criterion.
Linear Regression - Criterion #2
Will minimizing ∑_{i=1}^{n} |εi| work any better?
The residual at a typical point xi is
εi = yi − a0 − a1 xi
Figure. Linear regression of y vs. x data showing residuals at a typical point, xi.
Linear Regression - Criterion #2
Using y = 4x − 4 as the regression curve:

x      y      ypredicted   |ε| = |y − ypredicted|
2.0    4.0    4.0          0.0
3.0    6.0    8.0          2.0
2.0    6.0    4.0          2.0
3.0    8.0    8.0          0.0

∑_{i=1}^{4} |εi| = 4
Linear Regression - Criterion #2
Using y = 6 as the regression curve:

x      y      ypredicted   |ε| = |y − ypredicted|
2.0    4.0    6.0          2.0
3.0    6.0    6.0          0.0
2.0    6.0    6.0          0.0
3.0    8.0    6.0          2.0

∑_{i=1}^{4} |εi| = 4
Figure. Regression curve y = 6 with the y vs. x data.
Linear Regression - Criterion #2
∑_{i=1}^{4} |εi| = 4 for both regression models, y = 4x − 4 and y = 6. The sum of the absolute residuals has been made as small as possible, namely 4, but the regression model is not unique. Hence minimizing the sum of the absolute values of the residuals is also a bad criterion.
Least Squares Criterion
The least squares criterion minimizes the sum of the squares of the residuals:
Sr = ∑_{i=1}^{n} εi² = ∑_{i=1}^{n} (yi − a0 − a1 xi)²
Figure. Linear regression of y vs. x data showing residuals at a typical point, xi.
To find a0 and a1, we minimize Sr with respect to a0 and a1:
∂Sr/∂a0 = −2 ∑_{i=1}^{n} (yi − a0 − a1 xi) = 0
∂Sr/∂a1 = −2 ∑_{i=1}^{n} (yi − a0 − a1 xi) xi = 0
giving the normal equations
n a0 + a1 ∑_{i=1}^{n} xi = ∑_{i=1}^{n} yi
a0 ∑_{i=1}^{n} xi + a1 ∑_{i=1}^{n} xi² = ∑_{i=1}^{n} xi yi
Solve these two equations for a0 and a1.
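The two normal equations form a 2×2 linear system; a minimal Python sketch, using the four points (2,4), (3,6), (2,6), (3,8) from the criterion examples above:

```python
import numpy as np

# Solve the two normal equations for the straight line y = a0 + a1 x.
x = np.array([2.0, 3.0, 2.0, 3.0])
y = np.array([4.0, 6.0, 6.0, 8.0])
n = len(x)

A = np.array([[n,       x.sum()],
              [x.sum(), (x**2).sum()]])
b = np.array([y.sum(), (x * y).sum()])
a0, a1 = np.linalg.solve(A, b)
print(a0, a1)   # → 1.0 2.0, i.e. the least squares line is y = 1 + 2x
```

Unlike criteria #1 and #2, the least squares criterion yields a unique line for this data set.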
Example 1
The torque T needed to turn the torsion spring of a mousetrap through an angle θ is given below. Find the constants for the model
T = k1 + k2 θ

Angle θ (radians)   Torque T (N-m)
0.698132            0.188224
0.959931            0.209138
1.134464            0.230052
1.570796            0.250965
1.919862            0.313707

Figure. Data points for angle vs. torque data.
Example 1 cont.
The following table shows the summations needed to calculate the constants in the regression model.

Table. Tabulation of data for calculation of needed summations
θ (radians)   T (N-m)    θ² (radians²)   Tθ (N-m-radians)
0.698132      0.188224   0.487388        0.131405
0.959931      0.209138   0.921468        0.200758
1.134464      0.230052   1.2870          0.260986
1.570796      0.250965   2.4674          0.39421
1.919862      0.313707   3.6859          0.60228
∑:  6.2832    1.1921     8.8491          1.5896

Using the normal equations with n = 5 gives
a1 = 9.6091 × 10⁻² N-m/rad
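The summations above plug directly into the normal-equation formulas for a1 and a0; a minimal Python check on the mousetrap data:

```python
import numpy as np

# Least squares fit of T = k1 + k2*theta to the Example 1 mousetrap data.
theta = np.array([0.698132, 0.959931, 1.134464, 1.570796, 1.919862])
T = np.array([0.188224, 0.209138, 0.230052, 0.250965, 0.313707])
n = len(theta)

# k2 = (n*sum(T*theta) - sum(theta)*sum(T)) / (n*sum(theta^2) - sum(theta)^2)
k2 = (n * (theta * T).sum() - theta.sum() * T.sum()) \
     / (n * (theta**2).sum() - theta.sum()**2)
k1 = (T.sum() - k2 * theta.sum()) / n
print(k1, k2)   # k2 matches the slide's a1 = 9.6091e-2 N-m/rad
```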
Example 1 Results
Using linear regression, the trend line found from the data is
T = 0.11767 + 0.096091 θ
[Figure: torque vs. angle data with the linear regression trend line]
Can you find the energy in the spring if it is twisted from 0 to 180 degrees?
Polynomial Regression
Given n data points (x1, y1), (x2, y2), …, (xn, yn), fit the polynomial model
y = a0 + a1 x + … + am x^m   (m < n)
The residual at each point is yi − f(xi), and the sum of the squares of the residuals is
Sr = ∑_{i=1}^{n} (yi − a0 − a1 xi − … − am xi^m)²
Figure. Polynomial regression of y vs. x data, with the residual shown at a typical point.
http://numericalmethods.eng.usf.edu 15
11/14/2007
Setting the partial derivative of Sr with respect to each coefficient equal to zero gives m + 1 equations:
∂Sr/∂a0 = ∑_{i=1}^{n} 2 (yi − a0 − a1 xi − … − am xi^m)(−1) = 0
⋮
∂Sr/∂am = ∑_{i=1}^{n} 2 (yi − a0 − a1 xi − … − am xi^m)(−xi^m) = 0
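The m + 1 equations above can be written in matrix form as (VᵀV)a = Vᵀy, where V is the Vandermonde matrix with rows [1, xi, …, xi^m]. A minimal Python sketch, sanity-checked on data generated from an exact quadratic:

```python
import numpy as np

# Polynomial regression via the normal equations (V^T V) a = V^T y.
def poly_regress(x, y, m):
    V = np.vander(x, m + 1, increasing=True)   # rows [1, x_i, ..., x_i^m]
    return np.linalg.solve(V.T @ V, V.T @ y)

# Sanity check: data from the exact quadratic y = 1 + 2x + 3x^2
x = np.linspace(0.0, 4.0, 9)
a = poly_regress(x, 1.0 + 2.0 * x + 3.0 * x**2, m=2)
print(a)   # recovers approximately [1, 2, 3]
```

For large m the normal equations become ill-conditioned; in practice a QR-based least squares solver (e.g. numpy.linalg.lstsq) is preferred.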
Nonlinear Regression
The model leads to nonlinear equations in the constants C and A. Newton's method can be used for the solution (or MATLAB's fminsearch command).
Linearization of Data
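One common linearization: for the exponential model y = a·e^(bx), taking logarithms gives ln y = ln a + b x, a straight line in (x, ln y) that ordinary linear regression can fit. A minimal Python sketch (the data values a = 2, b = 0.5 are synthetic, purely for illustration):

```python
import numpy as np

# Linearization of an exponential model: regress ln(y) on x.
x = np.linspace(0.0, 3.0, 7)
y = 2.0 * np.exp(0.5 * x)          # synthetic data from y = 2 e^(0.5 x)

z = np.log(y)                      # z = ln a + b x
n = len(x)
b_fit = (n * (x * z).sum() - x.sum() * z.sum()) \
        / (n * (x**2).sum() - x.sum()**2)
lna = (z.sum() - b_fit * x.sum()) / n
a_fit = np.exp(lna)
print(a_fit, b_fit)                # recovers a ≈ 2.0, b ≈ 0.5
```

Note that linearizing minimizes the residuals of ln y, not of y itself, so for noisy data the result differs slightly from a direct nonlinear least squares fit.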