CHAPTER 5

CURVE FITTING
Presenter: Dr. Zalilah Sharer
© 2018 School of Chemical and Energy Engineering
Universiti Teknologi Malaysia
23 September 2018
TOPICS
• Linear regression (exponential model, power equation and saturation-growth-rate equation)
• Polynomial regression
• Polynomial interpolation (linear interpolation, quadratic interpolation, Newton divided differences)
• Lagrange interpolation
Curve Fitting
• Curve fitting describes techniques to fit curves through discrete data points so that intermediate estimates between the points can be obtained.
• Two general approaches for curve fitting:
a) Least-squares regression - fits the shape or general trend of the data with a "best" line, without necessarily matching the individual points (Figure PT5.1, pg 426).
- 2 types of fitting:
i) Linear regression
ii) Polynomial regression
b) Interpolation - fits curves that pass exactly through the data points (covered later in this chapter).
Figure PT5.1 shows sketches developed from the same set of data by three engineers:
a) Least-squares regression - did not attempt to connect the points, but characterized the general upward trend of the data with a straight line.
b) Linear interpolation - used straight-line segments (linear interpolation) to connect the points. This is a very common practice in engineering. If the values are close to being linear, such approximation provides estimates that are adequate for many engineering calculations. However, if the data is widely spaced, significant errors can be introduced by such linear interpolation.
c) Curvilinear interpolation - used curves to try to capture the trend suggested by the data.
Our goal here is to develop systematic and objective methods for deriving such curves.
a) Least-Squares Regression
i) Linear Regression
• Used to minimize the discrepancy between the data points and the fitted curve. When data exhibit significant error, polynomial interpolation is inappropriate and may yield unsatisfactory results when used to predict intermediate values (see Fig. 17.1, pg 455).
• Fig. 17.1 a) shows 7 experimentally derived data points exhibiting significant variability (significant error).
Curve Fitting
Linear regression fits a 'best' straight line through the points. The mathematical expression for the straight line is:

$$y = a_0 + a_1 x + e \qquad \text{(Eq 17.1)}$$

where:
a1 : slope
a0 : intercept
e : error, or residual, between the model and the observations

Rearranging the equation above:

$$e = y - a_0 - a_1 x$$

Thus, the error, or residual, is the discrepancy between the true value y and the approximate value, a0 + a1x, predicted by the linear equation.
Criteria for a 'Best' Fit
• One criterion for a "best" fit line through the data is to minimize the sum of the residual errors:

$$\sum_{i=1}^{n} e_i = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i) \qquad \text{(Eq 17.2)}$$

where n : total number of points

• This criterion is inadequate because positive and negative residuals can cancel. A strategy that overcomes this shortcoming is to minimize the sum of the squares of the errors between the measured y and the y calculated with the linear model:

$$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left(y_{i,\text{measured}} - y_{i,\text{model}}\right)^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2 \qquad \text{(Eq 17.3)}$$
Least-Squares Fit of a Straight Line
• To determine values for a0 and a1: i) differentiate equation 17.3 with respect to each coefficient, ii) set the derivatives equal to zero (minimizing Sr), and iii) note that Σa0 = n·a0. This gives equations 17.4 and 17.5, called the normal equations (refer to the text book), which can be solved simultaneously for a1 and a0:

$$a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2} \qquad \text{(Eq 17.6)}$$

$$a_0 = \bar{y} - a_1 \bar{x} \qquad \text{(Eq 17.7)}$$
Example 1
Use least-squares regression to fit a straight line to:

x 1 2 3 4 5 6 7

y 0.5 2.5 2.0 4.0 3.5 6.0 5.5
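
A minimal Python sketch of Eqs 17.6 and 17.7 (NumPy assumed; the helper name linear_least_squares is illustrative, not from the text). For the data above it gives a0 ≈ 0.0714 and a1 ≈ 0.8393, i.e. y = 0.0714 + 0.8393x:

```python
import numpy as np

def linear_least_squares(x, y):
    """Least-squares straight line y = a0 + a1*x (Eqs 17.6 and 17.7)."""
    n = len(x)
    a1 = ((n * np.sum(x * y) - np.sum(x) * np.sum(y))
          / (n * np.sum(x**2) - np.sum(x)**2))   # Eq 17.6
    a0 = np.mean(y) - a1 * np.mean(x)            # Eq 17.7
    return a0, a1

x = np.array([1.0, 2, 3, 4, 5, 6, 7])
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])
print(linear_least_squares(x, y))   # (0.0714..., 0.8392...)
```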


• Two criteria must hold for least-squares regression to provide the best (maximum-likelihood) estimates of a0 and a1:
i. The spread of the points around the line is of similar magnitude along the entire range of the data.
ii. The distribution of these points about the line is normal.

• If these criteria are met, a "standard deviation" for the regression line is given by:

$$s_{y/x} = \sqrt{\frac{S_r}{n-2}} \qquad \text{(Eq 17.9)}$$

sy/x : standard error of the estimate
"y/x" : the error is for a predicted value of y corresponding to a particular value of x
n − 2 : two data-derived estimates, a0 and a1, were used to compute Sr (we have lost 2 degrees of freedom)
• Equation 17.9 is comparable to the standard deviation (sy) about the mean:

$$s_y = \sqrt{\frac{S_t}{n-1}} \qquad \text{(PT5.2, pg 442)}$$

$$S_t = \sum (y_i - \bar{y})^2 \qquad \text{(PT5.3, pg 442)}$$

St : total sum of squares of the residuals between the data points and the mean.

• Just as in the case of the standard deviation, the standard error of the estimate quantifies the spread of the data.
Estimation of Error in Summary
1. Standard deviation:

$$s_y = \sqrt{\frac{S_t}{n-1}}, \qquad S_t = \sum (y_i - \bar{y})^2 \qquad \text{(PT5.2, PT5.3, pg 442)}$$

2. Standard error of the estimate:

$$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2 \qquad \text{(Eq 17.8)}$$

$$s_{y/x} = \sqrt{\frac{S_r}{n-2}} \qquad \text{(Eq 17.9)}$$

where y/x designates that the error is for a predicted value of y corresponding to a particular value of x.

3. Coefficient of determination:

$$r^2 = \frac{S_t - S_r}{S_t} \qquad \text{(Eq 17.10)}$$

4. Correlation coefficient:

$$r = \sqrt{\frac{S_t - S_r}{S_t}} \quad \text{or} \quad r = \frac{n \sum x_i y_i - \left(\sum x_i\right)\left(\sum y_i\right)}{\sqrt{n \sum x_i^2 - \left(\sum x_i\right)^2}\,\sqrt{n \sum y_i^2 - \left(\sum y_i\right)^2}} \qquad \text{(Eq 17.11)}$$
Example 2
Use least-squares regression to fit a straight line to:

x 1 2 3 4 5 6 7

y 0.5 2.5 2.0 4.0 3.5 6.0 5.5

Compute the standard deviation (Sy), the standard error of the estimate (Sy/x) and the correlation coefficient (r) for the data above (use the Example 1 result).
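
A minimal sketch of these error statistics (NumPy assumed; regression_stats is an illustrative helper name), using the a0 and a1 from Example 1. For this data St = 22.7143 and Sr = 2.9911, giving sy ≈ 1.946, sy/x ≈ 0.774 and r ≈ 0.932:

```python
import numpy as np

def regression_stats(x, y, a0, a1):
    """Error estimates for a straight-line fit (PT5.2-PT5.3, Eqs 17.8-17.10)."""
    n = len(x)
    St = np.sum((y - np.mean(y))**2)     # total sum of squares (PT5.3)
    Sr = np.sum((y - a0 - a1*x)**2)      # residual sum of squares (Eq 17.8)
    sy = np.sqrt(St / (n - 1))           # standard deviation (PT5.2)
    syx = np.sqrt(Sr / (n - 2))          # standard error of estimate (Eq 17.9)
    r2 = (St - Sr) / St                  # coefficient of determination (Eq 17.10)
    return sy, syx, np.sqrt(r2)

x = np.array([1.0, 2, 3, 4, 5, 6, 7])
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])
print(regression_stats(x, y, 0.0714286, 0.8392857))  # (1.9457, 0.7735, 0.9318)
```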
Work with your buddy and let's do Quiz 1
Use least-squares regression to fit a straight line to:

x 1 2 3 4 5 6 7 8 9
y 1 1.5 2 3 4 5 8 10 13

Compute the standard error of the estimate (Sy/x) and the correlation coefficient (r).
Quiz 2
Compute the standard error of the estimate and the
correlation coefficient.

x 0.25 0.75 1.25 1.50 2.00

y -0.45 -0.60 0.70 1.88 6.00


Linearization of Nonlinear Relationships
• Linear regression provides a powerful technique for fitting the best line to data when the relationship between the dependent and independent variables is linear.
• This is not always the case, so the first step in any regression analysis should be to plot the data and visually inspect whether a linear model applies.
Figure 17.8: a) data is ill-suited for linear regression,
b) parabola is preferable.
Nonlinear Relationships
• Linear regression is predicated on the fact that the relationship between the dependent and independent variables is linear - this is not always the case.
• Three common examples are:

exponential: $y = \alpha_1 e^{\beta_1 x}$
power: $y = \alpha_2 x^{\beta_2}$
saturation-growth-rate: $y = \alpha_3 \dfrac{x}{\beta_3 + x}$
Linearization of Nonlinear Relationships
• One option for finding the coefficients of a nonlinear fit is to linearize it. For the three common models, this involves taking logarithms or inverting:

Model: Nonlinear → Linearized
exponential: $y = \alpha_1 e^{\beta_1 x}$ → $\ln y = \ln\alpha_1 + \beta_1 x$
power: $y = \alpha_2 x^{\beta_2}$ → $\log y = \log\alpha_2 + \beta_2 \log x$
saturation-growth-rate: $y = \alpha_3 \frac{x}{\beta_3 + x}$ → $\frac{1}{y} = \frac{1}{\alpha_3} + \frac{\beta_3}{\alpha_3}\frac{1}{x}$
Linearization of Nonlinear Relationships
• After linearization, linear regression can be applied to determine the linear relation.
• For example, the linearized exponential equation

$$\ln y = \ln\alpha_1 + \beta_1 x$$

has the straight-line form $y = a_0 + a_1 x$, with $a_0 = \ln\alpha_1$ and $a_1 = \beta_1$.
Figure 17.9: The exponential, power and saturation-growth-rate equations and their linearized versions, respectively.
• Fig. 17.9, pg 453 shows behavior such as population growth or radioactive decay.

Fig. 17.9 (a): the exponential model

$$y = \alpha_1 e^{\beta_1 x} \qquad \text{(17.12)}$$

α1, β1 : constants, β1 ≠ 0

• This model is used in many fields of engineering to characterize quantities that change at a rate proportional to their own magnitude:
quantities increase: β1 positive
quantities decrease: β1 negative
Example 3
Fit an exponential model y = a e^{bx} to:

x 0.4 0.8 1.2 1.6 2.0 2.3
y 750 1000 1400 2000 2700 3750

Solution
• Linearize the model:

$$\ln y = \ln a + bx$$

which has the straight-line form y = a0 + a1x (Eq. 17.1).

• Build the table of parameters used in Eqs 17.6 and 17.7, as in Example 17.1, pg 444:
xi yi ln yi xi² (xi)(ln yi)
0.4 750 6.620073 0.16 2.648029
0.8 1000 6.900775 0.64 5.520620
1.2 1400 7.244228 1.44 8.693074
1.6 2000 7.600902 2.56 12.161443
2.0 2700 7.901007 4.00 15.802014
2.3 3750 8.229511 5.29 18.927875
Σ 8.3 44.496496 14.09 63.753055

n = 6

$$\sum x_i = 8.3 \qquad \sum \ln y_i = 44.496496 \qquad \sum x_i^2 = 14.09 \qquad \sum x_i \ln y_i = 63.753055$$

$$\bar{x} = \frac{8.3}{6} = 1.383333 \qquad \overline{\ln y} = \frac{44.496496}{6} = 7.416083$$

$$a_1 = b = \frac{n\sum x_i \ln y_i - \sum x_i \sum \ln y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2} = \frac{(6)(63.753055) - (8.3)(44.496496)}{(6)(14.09) - (8.3)^2} = 0.843$$

$$a_0 = \ln a = \overline{\ln y} - b\bar{x} = 7.416083 - (0.843)(1.383333) = 6.25$$

Straight line:

$$\ln y = \ln a + bx = 6.25 + 0.843x$$

Exponential: $y = a e^{bx}$

$$\ln a = 6.25 \Rightarrow a = e^{6.25} = 518$$

$$\therefore y = a e^{bx} = 518\, e^{0.843x}$$
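
A minimal Python sketch of this linearized fit (NumPy assumed): regress ln y on x with Eqs 17.6-17.7, then back-transform the intercept:

```python
import numpy as np

x = np.array([0.4, 0.8, 1.2, 1.6, 2.0, 2.3])
y = np.array([750.0, 1000, 1400, 2000, 2700, 3750])

Y = np.log(y)                                  # linearize: ln y = ln a + b*x
n = len(x)
b = ((n*np.sum(x*Y) - np.sum(x)*np.sum(Y))
     / (n*np.sum(x**2) - np.sum(x)**2))        # Eq 17.6 on transformed data
ln_a = np.mean(Y) - b*np.mean(x)               # Eq 17.7 on transformed data
print(np.exp(ln_a), b)                         # ≈ 518, 0.843
```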
Figure 17.9: The exponential, power and saturation-growth-rate equations and their linearized versions, respectively.
Power Equation
• Equation (17.13) can be linearized by taking the base-10 logarithm to yield Eq (17.16):

$$y = \alpha_2 x^{\beta_2} \qquad \text{(17.13)}$$

$$\log y = \log\alpha_2 + \beta_2 \log x \qquad \text{(17.16)}$$

• A plot of log y versus log x will yield a straight line with a slope of β2 and an intercept of log α2.
Example 4

Linearize the power equation (17.13) and fit it to the data in the table below using a logarithmic transformation of the data.

x 1 2 3 4 5
y 0.5 1.7 3.4 5.7 8.4
xi yi log xi log yi (log xi)² (log xi)(log yi)

1 0.5 0 -0.301 0 0
2 1.7 0.301 0.226 0.090601 0.068026
3 3.4 0.477 0.534 0.227529 0.254718
4 5.7 0.602 0.753 0.362404 0.453306
5 8.4 0.699 0.922 0.488601 0.644478
Σ 2.079 2.134 1.169135 1.420528
n = 5

$$\sum \log x_i = 2.079 \qquad \sum \log y_i = 2.134 \qquad \sum (\log x_i)^2 = 1.169135 \qquad \sum (\log x_i)(\log y_i) = 1.420528$$

$$\overline{\log x} = \frac{2.079}{5} = 0.4158 \qquad \overline{\log y} = \frac{2.134}{5} = 0.4268$$

$$b = \frac{n\sum(\log x_i)(\log y_i) - \left(\sum \log x_i\right)\left(\sum \log y_i\right)}{n\sum(\log x_i)^2 - \left(\sum \log x_i\right)^2} = \frac{(5)(1.420528) - (2.079)(2.134)}{(5)(1.169135) - (2.079)^2} = 1.75$$

$$\log a = \overline{\log y} - b\,\overline{\log x} = 0.4268 - (1.75)(0.4158) = -0.3$$

Straight line:

$$\log y = \log a + b \log x = -0.3 + 1.75 \log x$$

Power: $y = a x^b$

$$\log a = -0.3 \Rightarrow a = 10^{-0.3} = 0.5$$

$$\therefore y = a x^b = 0.5\, x^{1.75}$$
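
The same pattern works for the power model; a minimal sketch (NumPy assumed): regress log10 y on log10 x, then back-transform the intercept:

```python
import numpy as np

x = np.array([1.0, 2, 3, 4, 5])
y = np.array([0.5, 1.7, 3.4, 5.7, 8.4])

X, Y = np.log10(x), np.log10(y)                # linearize (Eq 17.16)
n = len(X)
b = ((n*np.sum(X*Y) - np.sum(X)*np.sum(Y))
     / (n*np.sum(X**2) - np.sum(X)**2))        # slope = beta_2
log_a = np.mean(Y) - b*np.mean(X)              # intercept = log(alpha_2)
print(10**log_a, b)                            # ≈ 0.5, 1.75
```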
• Fig. 17.10 a), pg 455, is a plot of the original data in its untransformed state, while Fig. 17.10 b) is a plot of the transformed data.
• The intercept is log α2 = −0.300, and taking the antilogarithm gives α2 = 10⁻⁰·³ = 0.5.
• The slope is β2 = 1.75. Consequently, the power equation is y = 0.5x^{1.75}.
Figure 17.9: The exponential, power and saturation-growth-rate equations and their linearized versions, respectively.
Saturation-Growth-Rate Equation
• Equation (17.14) can be linearized by inverting it to yield Eq (17.17):

$$y = \alpha_3 \frac{x}{\beta_3 + x} \qquad \text{(17.14)}$$

$$\frac{1}{y} = \frac{\beta_3}{\alpha_3}\frac{1}{x} + \frac{1}{\alpha_3} \qquad \text{(17.17)}$$

• A plot of 1/y versus 1/x will yield a straight line with a slope of β3/α3 and an intercept of 1/α3.
• In their transformed forms, these models are fit using linear regression in order to evaluate the constant coefficients.
• This model is well-suited for characterizing population growth under limiting conditions.
Example 5

Linearize the saturation-growth-rate equation and fit it to the data in the table below.

x 0.75 2 2.5 4 6 8 8.5
y 0.8 1.3 1.2 1.6 1.7 1.8 1.7
n = 7

$$\sum \frac{1}{x_i} = 2.8926 \qquad \sum \frac{1}{y_i} = 5.2094 \qquad \sum \left(\frac{1}{x_i}\right)^2 = 2.3074 \qquad \sum \frac{1}{x_i}\frac{1}{y_i} = 2.8127$$

$$\overline{\left(\frac{1}{x}\right)} = \frac{2.8926}{7} = 0.4132 \qquad \overline{\left(\frac{1}{y}\right)} = \frac{5.2094}{7} = 0.7442$$
xi yi 1/xi 1/yi (1/xi)² (1/xi)(1/yi)

0.75 0.8 1.33333 1.25000 1.7777 1.6666


2 1.3 0.50000 0.76923 0.2500 0.3846
2.5 1.2 0.40000 0.83333 0.1600 0.3333
4 1.6 0.25000 0.62500 0.0625 0.1562
6 1.7 0.16667 0.58823 0.0278 0.0981
8 1.8 0.12500 0.55555 0.0156 0.0694
8.5 1.7 0.11765 0.58823 0.0138 0.1045
Σ 2.89260 5.20940 2.3074 2.8127

 1  1   1   1 
n Σ     − Σ   Σ  
b x
 i  i y x
 i   i  y
= 2
a  1   1  2
nΣ 
 − ( Σ   )
 xi   xi 
b ( 7 )( 2 . 8127 ) − ( 2 . 8926 )( 5 . 2094 )
= = 0 . 5935
a ( 7 )( 2 . 3074 ) − ( 2 . 8926 ) 2
1 1 b 1
  =   −   = 0 . 7442 − ( 0 . 5935 )( 0 . 4132 )
a  y a x
1
  = 0 . 4990
a
1 1 b1
Straight-line: = +
y a a x

1 1
∴ = 0.4990 + 0.5935
y x

 x 
Saturation-growth: y = a  b + x 
1
= 0 .4990 ⇒ ∴ a = 2
a
b
= 0 .5935 ⇒ ∴ b = ( 0 .5935 )( 2 ) = 1 .187
a
 x 
∴ y = 2 
 1 .187 + x 
Let's do Quiz 3

Fit a power equation and a saturation-growth-rate equation to:

x 1 2 3 4 5 6 7
y 2.1 2.2 2.3 2.4 2.5 2.6 2.7
Figure 17.8: a) data is ill-suited for linear regression, b) parabola is
preferable.
Polynomial Regression
• Another alternative is to fit polynomials to the data using polynomial regression.
• The least-squares procedure can be readily extended to fit the data to a higher-order polynomial.
• For example, to fit a second-order polynomial (quadratic):

$$y = a_0 + a_1 x + a_2 x^2 + e$$

• The sum of the squares of the residuals is:

$$S_r = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i - a_2 x_i^2)^2 \qquad \text{(17.18)}$$

where n = total number of points.

• Taking the derivative of equation (17.18) with respect to each of the unknown coefficients a0, a1 and a2 of the polynomial:

$$\frac{\partial S_r}{\partial a_0} = -2\sum (y_i - a_0 - a_1 x_i - a_2 x_i^2)$$
$$\frac{\partial S_r}{\partial a_1} = -2\sum x_i (y_i - a_0 - a_1 x_i - a_2 x_i^2)$$
$$\frac{\partial S_r}{\partial a_2} = -2\sum x_i^2 (y_i - a_0 - a_1 x_i - a_2 x_i^2)$$

• Setting these equations equal to zero and rearranging (noting that Σa0 = n·a0) develops the set of normal equations:

$$(n)a_0 + \left(\sum x_i\right)a_1 + \left(\sum x_i^2\right)a_2 = \sum y_i$$
$$\left(\sum x_i\right)a_0 + \left(\sum x_i^2\right)a_1 + \left(\sum x_i^3\right)a_2 = \sum x_i y_i \qquad \text{(17.19)}$$
$$\left(\sum x_i^2\right)a_0 + \left(\sum x_i^3\right)a_1 + \left(\sum x_i^4\right)a_2 = \sum x_i^2 y_i$$

• These are 3 linear equations in the 3 unknown coefficients (a0, a1 and a2), which can be calculated directly from the observed data.
• In matrix form:

$$\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix}$$

• The two-dimensional case can be easily extended to an mth-order polynomial:

$$y = a_0 + a_1 x + a_2 x^2 + \dots + a_m x^m + e$$

• Thus, the standard error for an mth-order polynomial is:

$$s_{y/x} = \sqrt{\frac{S_r}{n - (m+1)}} \qquad \text{(17.20)}$$
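
A minimal sketch of the quadratic case (NumPy assumed; the helper name quadratic_least_squares is illustrative): assemble the 3×3 system of Eq 17.19 and solve it. The usage anticipates the data of Example 6 below; np.polyfit(x, y, 2) returns the same coefficients, highest power first:

```python
import numpy as np

def quadratic_least_squares(x, y):
    """Assemble and solve the normal equations (Eq 17.19) for a quadratic fit."""
    n = len(x)
    A = np.array([[n,            x.sum(),      (x**2).sum()],
                  [x.sum(),      (x**2).sum(), (x**3).sum()],
                  [(x**2).sum(), (x**3).sum(), (x**4).sum()]])
    rhs = np.array([y.sum(), (x*y).sum(), (x**2 * y).sum()])
    return np.linalg.solve(A, rhs)            # [a0, a1, a2]

x = np.array([0.0, 1, 2, 3, 4, 5])
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])
print(quadratic_least_squares(x, y))          # ≈ [2.47857, 2.35929, 1.86071]
```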
Example 6
Fit a second-order polynomial to the data in the first two columns of Table 17.4:
xi yi xi² xi³ xi⁴ xiyi xi²yi

0 2.1 0 0 0 0 0
1 7.7 1 1 1 7.7 7.7
2 13.6 4 8 16 27.2 54.4
3 27.2 9 27 81 81.6 244.8
4 40.9 16 64 256 163.6 654.4
5 61.1 25 125 625 305.5 1527.5
Σ 15 152.6 55 225 979 585.6 2488.8

• From the given data:

m = 2, n = 6, x̄ = 2.5, ȳ = 25.433
Σxi = 15, Σyi = 152.6, Σxi² = 55, Σxi³ = 225, Σxi⁴ = 979, Σxiyi = 585.6, Σxi²yi = 2488.8
$$\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix}$$

• Therefore, the simultaneous linear equations are:

$$\begin{bmatrix} 6 & 15 & 55 \\ 15 & 55 & 225 \\ 55 & 225 & 979 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 152.6 \\ 585.6 \\ 2488.8 \end{bmatrix}$$

• Solving these equations with a technique such as Gauss elimination gives:

a0 = 2.47857, a1 = 2.35929, and a2 = 1.86071

• Therefore, the least-squares quadratic equation for this case is:

$$y = 2.47857 + 2.35929x + 1.86071x^2$$

• To calculate St and Sr, build columns 3 and 4 of Table 17.4:
xi yi (yi − ȳ)² (yi − a0 − a1xi − a2xi²)²
0 2.1 544.44 0.14332
1 7.7 314.47 1.00286
2 13.6 140.03 1.08158
3 27.2 3.12 0.80491
4 40.9 239.22 0.61951
5 61.1 1272.11 0.09439
Σ 152.6 2513.39 3.74657

$$S_t = \sum (y_i - \bar{y})^2 = 2513.39 \qquad S_r = \sum (y_i - a_0 - a_1 x_i - a_2 x_i^2)^2 = 3.74657$$

The standard error of the regression polynomial (Eq 17.20):

$$s_{y/x} = \sqrt{\frac{S_r}{n - (m+1)}} = \sqrt{\frac{3.74657}{6 - (2+1)}} = 1.12$$
• The correlation coefficient can be calculated using equations 17.10 and 17.11:

$$r^2 = \frac{S_t - S_r}{S_t} = \frac{2513.39 - 3.74657}{2513.39} = 0.99851$$

$$\therefore r = 0.99925$$

• These results indicate that 99.851% of the original uncertainty has been explained by the model. This supports the conclusion that the quadratic equation represents an excellent fit, as is evident from Fig. 17.11.
Figure 17.11: Fit of a second-order polynomial.
TOPICS
• Polynomial interpolation (linear interpolation, quadratic interpolation, Newton divided differences)
• Lagrange interpolation
• Spline interpolation
Interpolation
• Polynomial interpolation is a common method to determine intermediate values between data points.
• The general equation for an nth-order polynomial is:

$$f(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n \qquad \text{(18.1)}$$

• Polynomial interpolation consists of determining the unique nth-order polynomial that fits n+1 data points.
• For n+1 data points, there is one and only one polynomial of order n that passes through all the points.
• For example, there is only one straight line (first-order polynomial) that connects two points (Fig. 18.1a), and only one parabola that connects a set of three points (Fig. 18.1b).
• Two popular alternative mathematical formats are used to express an interpolating polynomial:
a. Newton polynomial
b. Lagrange polynomial

18.1 Newton's Divided-Difference Interpolating Polynomials
• Among the most popular and useful polynomial forms.
• We consider the first- and second-order versions first.

18.1.1 Linear Interpolation
• The simplest form of interpolation is to connect two data points with a straight line.
Figure 18.2: Graphical depiction of linear interpolation.
• The linear interpolation technique can be depicted graphically as in Fig. 18.2; similar triangles can be rearranged to yield the linear-interpolation formula:

$$f_1(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}(x - x_0) \qquad \text{(18.2)}$$

- f1(x) denotes a first-order interpolating polynomial.
- The term $\frac{f(x_1) - f(x_0)}{x_1 - x_0}$ is a finite-divided-difference approximation of the first derivative.

• In general, the smaller the interval between the data points, the better the approximation.
Example 7
Estimate the natural logarithm of 2 using linear interpolation. First, perform the computation by interpolating between ln 1 = 0 and ln 6 = 1.791759. Then, repeat the procedure, but use a smaller interval from ln 1 to ln 4 (1.386294). Note that the true value of ln 2 is 0.6931472.

Solution
Using equation (18.2), a linear interpolation for ln 2 from x0 = 1 (f(x0) = 0) to x1 = 6 (f(x1) = 1.791759) gives:

$$f_1(2) = 0 + \frac{1.791759 - 0}{6 - 1}(2 - 1) = 0.3583519$$

$$\varepsilon_t = \frac{0.6931472 - 0.3583519}{0.6931472} \times 100\% = 48.3\%$$
• Then, using the smaller interval from x0 = 1 to x1 = 4 (f(x1) = 1.386294) yields:

$$f_1(2) = 0 + \frac{1.386294 - 0}{4 - 1}(2 - 1) = 0.4620981$$

$$\varepsilon_t = \frac{0.6931472 - 0.4620981}{0.6931472} \times 100\% = 33.3\%$$

• Thus, using the shorter interval reduces the percent relative error to εt = 33.3%.
• Both interpolations are shown in Fig. 18.3, along with the true function.
Figure 18.3: Comparison of two linear interpolations
with different intervals.
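
A minimal sketch of Eq 18.2 that reproduces both estimates (standard-library Python only; the helper name lerp is illustrative):

```python
import math

def lerp(x, x0, f0, x1, f1):
    """First-order (linear) interpolation, Eq 18.2."""
    return f0 + (f1 - f0) / (x1 - x0) * (x - x0)

true = math.log(2)                               # 0.6931472
for x1 in (6.0, 4.0):
    est = lerp(2.0, 1.0, 0.0, x1, math.log(x1))
    print(est, abs(true - est) / true * 100)     # 0.3583519 (48.3%), then 0.4620981 (33.3%)
```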
Quiz 4
Estimate the logarithm of 5 to the base 10 (log 5) using linear interpolation.

a) Interpolate between log 4 = 0.60206 and log 6 = 0.7781513.
b) Interpolate between log 4.5 = 0.6532125 and log 5.5 = 0.7403627.

For each of the interpolations, compute the percent relative error based on the true value.
18.1.2 Quadratic Interpolation
The error in the linear-interpolation example (Example 7) resulted from approximating a curve with a straight line. With 3 data points the estimate can be improved with a second-order polynomial (quadratic polynomial or parabola):

$$f_2(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) \qquad \text{(18.3)}$$

Although equation (18.3) seems to differ from the general polynomial (equation 18.1), the two equations are equivalent; multiplying out the terms in equation (18.3) yields:

$$f_2(x) = b_0 + b_1 x - b_1 x_0 + b_2 x^2 + b_2 x_0 x_1 - b_2 x x_0 - b_2 x x_1$$

or, collecting terms,

$$f_2(x) = a_0 + a_1 x + a_2 x^2$$

where

$$a_0 = b_0 - b_1 x_0 + b_2 x_0 x_1, \qquad a_1 = b_1 - b_2 x_0 - b_2 x_1, \qquad a_2 = b_2$$

• Thus, equations 18.1 and 18.3 are alternative, equivalent formulations of the unique second-order polynomial joining three points.
• To determine the coefficients b0, b1 and b2, substitute the data points into Eq 18.3: x = x0 yields Eq 18.4; substituting Eq 18.4 and x = x1 yields Eq 18.5; substituting Eqs 18.4 and 18.5 with x = x2 yields Eq 18.6:

$$b_0 = f(x_0) \qquad \text{(18.4)}$$

$$b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0} \qquad \text{(18.5)}$$

$$b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0} \qquad \text{(18.6)}$$
Example 8

Fit a second-order polynomial to the three points used in the linear interpolation example (Example 7):

x0 = 1, f(x0) = 0
x1 = 4, f(x1) = 1.386294
x2 = 6, f(x2) = 1.791759

Use the polynomial to evaluate ln 2


Solution

$$b_0 = f(x_0) = 0$$

$$b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0} = \frac{1.386294 - 0}{4 - 1} = 0.4620981$$

$$b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0} = \frac{\dfrac{1.791759 - 1.386294}{6 - 4} - 0.4620981}{6 - 1} = -0.0518731$$
Substituting these values into equation 18.3 yields the quadratic formula:

$$f_2(x) = 0 + 0.4620981(x - 1) - 0.0518731(x - 1)(x - 4)$$

which can be evaluated at x = 2 to give f2(2) = 0.5658444, representing a relative error of εt = 18.4%.

• Thus, the curvature introduced by the quadratic formula (Fig. 18.4) improves the interpolation compared with the result obtained using straight lines in Example 7 (Fig. 18.3).
Figure 18.4: The use of the quadratic formula to estimate ln 2 improves the interpolation.
Example 9
Given the data, calculate f(3.4) using Newton's polynomials of order 1 to 2.

x 1 2 2.5 3 4 5
f(x) 1 5 7 8 2 1

Solution
Choose the base points closest to x = 3.4:
x0 = 3 ⇒ f(x0) = 8
x1 = 4 ⇒ f(x1) = 2
x2 = 2.5 ⇒ f(x2) = 7 (or x2 = 5 ⇒ f(x2) = 1)
Use equations 18.4-18.6 to find b0, b1 and b2.

1st order (needs f[x0, x1]):

From Eq 18.4:
$$b_0 = f(x_0) = 8$$

From Eq 18.5:
$$b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0} = \frac{2 - 8}{4 - 3} = -6$$

From equation 18.3:
$$f_1(x) = b_0 + b_1(x - x_0) = 8 + (-6)(x - 3)$$
$$\Rightarrow f_1(3.4) = 8 + (-6)(3.4 - 3) = 5.6$$
2nd order (needs f[x0, x1, x2]):
From equation 18.6:

$$b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0} = \frac{\dfrac{7 - 2}{2.5 - 4} - (-6)}{2.5 - 3} = \frac{-3.3333 + 6}{-0.5} = -5.3333$$

From Eq 18.3 for the 2nd order:

$$f_2(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) = 8 + (-6)(x - 3) + (-5.3333)(x - 3)(x - 4)$$

$$\Rightarrow f_2(3.4) = 8 + (-6)(3.4 - 3) + (-5.3333)(3.4 - 3)(3.4 - 4) = 6.88$$
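
A minimal sketch of the second-order formula as a function (plain Python; newton_quadratic is an illustrative name); it reproduces f2(3.4) = 6.88:

```python
def newton_quadratic(x0, x1, x2, f0, f1, f2, x):
    """Second-order Newton interpolating polynomial (Eqs 18.3-18.6)."""
    b0 = f0                                         # Eq 18.4
    b1 = (f1 - f0) / (x1 - x0)                      # Eq 18.5
    b2 = ((f2 - f1) / (x2 - x1) - b1) / (x2 - x0)   # Eq 18.6
    return b0 + b1*(x - x0) + b2*(x - x0)*(x - x1)  # Eq 18.3

print(newton_quadratic(3, 4, 2.5, 8, 2, 7, 3.4))    # 6.88
```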
Quiz 5
Given the data, calculate f(4) using Newton's polynomials of order 1 to 2.

x 1 2 3 5 6
f(x) 4.75 4 5.25 19.75 36
18.1.3 General Form of Newton's Interpolating Polynomials
• The nth-order polynomial is:

$$f_n(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) + \dots + b_n(x - x_0)(x - x_1)\cdots(x - x_{n-1}) \qquad \text{(18.7)}$$

• For an nth-order polynomial, n + 1 data points are required:
[x0, f(x0)], [x1, f(x1)], …, [xn, f(xn)]

• These data points and the following equations are used to evaluate the coefficients:

$$b_0 = f(x_0) \qquad \text{(18.8)}$$
$$b_1 = f[x_1, x_0] \qquad \text{(18.9)}$$
$$b_2 = f[x_2, x_1, x_0] \qquad \text{(18.10)}$$
$$\vdots$$
$$b_n = f[x_n, x_{n-1}, \dots, x_1, x_0] \qquad \text{(18.11)}$$
• The bracketed [ ] function evaluations are finite divided differences (FDD).
• For example, the first finite divided difference is represented generally as:

$$f[x_i, x_j] = \frac{f(x_i) - f(x_j)}{x_i - x_j} \qquad \text{(18.12)}$$

• The second finite divided difference:

$$f[x_i, x_j, x_k] = \frac{f[x_i, x_j] - f[x_j, x_k]}{x_i - x_k} \qquad \text{(18.13)}$$

• Similarly, the nth finite divided difference is:

$$f[x_n, x_{n-1}, \dots, x_1, x_0] = \frac{f[x_n, x_{n-1}, \dots, x_1] - f[x_{n-1}, x_{n-2}, \dots, x_0]}{x_n - x_0} \qquad \text{(18.14)}$$
• These differences can be used to evaluate the coefficients in equations (18.8) through (18.11), which can then be substituted into equation (18.7) to yield the interpolating polynomial, called Newton's divided-difference interpolating polynomial:

$$f_n(x) = f(x_0) + (x - x_0)f[x_1, x_0] + (x - x_0)(x - x_1)f[x_2, x_1, x_0] + \dots + (x - x_0)(x - x_1)\cdots(x - x_{n-1})f[x_n, x_{n-1}, \dots, x_0] \qquad \text{(18.15)}$$
Example 10
In Example 8, data points at x0 = 1, x1 = 4 and x2 = 6 were used to estimate ln 2 with a parabola. Now, adding a fourth point (x3 = 5; f(x3) = 1.609438), estimate ln 2 with a third-order Newton's interpolating polynomial.

x0 = 1, f(x0) = 0
x1 = 4, f(x1) = 1.386294
x2 = 6, f(x2) = 1.791759
x3 = 5, f(x3) = 1.609438
Solution
The third-order polynomial with n = 3 is:

$$f_3(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) + b_3(x - x_0)(x - x_1)(x - x_2)$$

The first divided differences (Eq 18.12):

$$f[x_1, x_0] = \frac{1.386294 - 0}{4 - 1} = 0.4620981$$
$$f[x_2, x_1] = \frac{1.791759 - 1.386294}{6 - 4} = 0.2027326$$
$$f[x_3, x_2] = \frac{1.609438 - 1.791759}{5 - 6} = 0.1823216$$

The second divided differences (Eq 18.13):

$$f[x_2, x_1, x_0] = \frac{0.2027326 - 0.4620981}{6 - 1} = -0.05187311$$
$$f[x_3, x_2, x_1] = \frac{0.1823216 - 0.2027326}{5 - 4} = -0.02041100$$

The third divided difference (Eq 18.14):

$$f[x_3, x_2, x_1, x_0] = \frac{-0.02041100 - (-0.05187311)}{5 - 1} = 0.007865529$$
Finally, using Eqs 18.8-18.11:

b0 = f(x0) = 0
b1 = f[x1, x0] = 0.4620981
b2 = f[x2, x1, x0] = −0.05187311
b3 = f[x3, x2, x1, x0] = 0.007865529

Inserting all values into Eq 18.7:

$$f_3(x) = 0 + 0.4620981(x - 1) - 0.05187311(x - 1)(x - 4) + 0.007865529(x - 1)(x - 4)(x - 6)$$

$$f_3(2) = 0.6287686$$

which represents a relative error of εt = 9.3%.
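
The procedure generalizes to any order. A minimal sketch (NumPy assumed; the helper names are illustrative) that builds the divided-difference table in place and evaluates Eq 18.15; for the data above it reproduces f3(2) = 0.6287686:

```python
import numpy as np

def newton_coeffs(x, y):
    """Divided-difference coefficients b0..bn (Eqs 18.8-18.11 via 18.12-18.14)."""
    x = np.asarray(x, dtype=float)
    b = np.array(y, dtype=float)
    for j in range(1, len(x)):
        # j-th column of the divided-difference table, stored in place
        b[j:] = (b[j:] - b[j-1:-1]) / (x[j:] - x[:-j])
    return b

def newton_eval(b, xdata, x):
    """Evaluate Newton's interpolating polynomial (Eq 18.7/18.15) at x."""
    p, term = b[0], 1.0
    for i in range(1, len(b)):
        term *= (x - xdata[i-1])
        p += b[i] * term
    return p

xd = [1.0, 4.0, 6.0, 5.0]
yd = [0.0, 1.386294, 1.791759, 1.609438]
b = newton_coeffs(xd, yd)        # [0, 0.4620981, -0.05187311, 0.007865529]
print(newton_eval(b, xd, 2.0))   # 0.6287686
```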
Quiz 6
Calculate f(4) with third- and fourth-order Newton's interpolating polynomials.

x 1 2 3 5 6
f(x) 4.75 4 5.25 19.75 36
Solution
Taking x0 = 3, x1 = 5, x2 = 2 and x3 = 6, the 1st FDDs (Eq 18.12) are:

$$f[x_1, x_0] = \frac{19.75 - 5.25}{5 - 3} = 7.25$$
$$f[x_2, x_1] = \frac{4 - 19.75}{2 - 5} = 5.25$$
$$f[x_3, x_2] = \frac{36 - 4}{6 - 2} = 8$$

2nd FDDs (Eq 18.13):

$$f[x_2, x_1, x_0] = \frac{5.25 - 7.25}{2 - 3} = 2$$
$$f[x_3, x_2, x_1] = \frac{8 - 5.25}{6 - 5} = 2.75$$

3rd FDD (Eq 18.14):

$$f[x_3, x_2, x_1, x_0] = \frac{2.75 - 2}{6 - 3} = 0.25$$

Finally:

b0 = f(x0) = 5.25
b1 = f[x1, x0] = 7.25
b2 = f[x2, x1, x0] = 2
b3 = f[x3, x2, x1, x0] = 0.25

$$f_3(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) + b_3(x - x_0)(x - x_1)(x - x_2)$$
$$f_3(x) = 5.25 + 7.25(x - 3) + 2(x - 3)(x - 5) + 0.25(x - 3)(x - 5)(x - 2)$$

$$\therefore f_3(4) = 10$$
Lagrange Interpolating Polynomials
• The Lagrange interpolating polynomial is simply a reformulation of the Newton polynomial that avoids the computation of divided differences:

$$f_n(x) = \sum_{i=0}^{n} L_i(x)\, f(x_i) \qquad \text{(18.20)}$$

$$L_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j} \qquad \text{(18.21)}$$

• Here Π designates "the product of". For example, the linear version (n = 1) is:

$$f_1(x) = \frac{x - x_1}{x_0 - x_1} f(x_0) + \frac{x - x_0}{x_1 - x_0} f(x_1) \qquad \text{(18.22)}$$

• And the second-order version (n = 2) is:

$$f_2(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} f(x_0) + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} f(x_1) + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} f(x_2) \qquad \text{(18.23)}$$

• For n = 3:

$$f_3(x) = \frac{(x - x_1)(x - x_2)(x - x_3)}{(x_0 - x_1)(x_0 - x_2)(x_0 - x_3)} f(x_0) + \frac{(x - x_0)(x - x_2)(x - x_3)}{(x_1 - x_0)(x_1 - x_2)(x_1 - x_3)} f(x_1) + \frac{(x - x_0)(x - x_1)(x - x_3)}{(x_2 - x_0)(x_2 - x_1)(x_2 - x_3)} f(x_2) + \frac{(x - x_0)(x - x_1)(x - x_2)}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)} f(x_3)$$
Example 11
Use Lagrange interpolating polynomials of the first and second order to evaluate ln 2 based on the data given:

x0 = 1, f(x0) = 0
x1 = 4, f(x1) = 1.386294
x2 = 6, f(x2) = 1.791760

Solution:
First-order polynomial at x = 2 (Eq 18.22):

$$f_1(2) = \frac{2 - 4}{1 - 4}(0) + \frac{2 - 1}{4 - 1}(1.386294) = 0.4620981$$

Second-order polynomial at x = 2 (Eq 18.23):

$$f_2(2) = \frac{(2 - 4)(2 - 6)}{(1 - 4)(1 - 6)}(0) + \frac{(2 - 1)(2 - 6)}{(4 - 1)(4 - 6)}(1.386294) + \frac{(2 - 1)(2 - 4)}{(6 - 1)(6 - 4)}(1.791760) = 0.5658444$$
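
A minimal sketch of Eqs 18.20-18.21 (standard-library Python only; the helper name lagrange is illustrative); the two calls reproduce the first- and second-order estimates above:

```python
def lagrange(xdata, ydata, x):
    """Lagrange interpolating polynomial (Eqs 18.20-18.21) evaluated at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xdata, ydata)):
        L = 1.0
        for j, xj in enumerate(xdata):
            if j != i:
                L *= (x - xj) / (xi - xj)   # one factor of the product in Eq 18.21
        total += L * yi                     # the sum in Eq 18.20
    return total

print(lagrange([1, 4], [0, 1.386294], 2))               # 0.4620981
print(lagrange([1, 4, 6], [0, 1.386294, 1.791760], 2))  # 0.5658444
```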
Work with your buddy to do Quiz 7
Quiz 7
Given the data, calculate f(4) using the Lagrange
polynomials of order 1 to 2.

x 1 2 3 5 6
f(x) 4.75 4 5.25 19.75 36
Chapter 5
Let's do past year questions:
Q1, Q3, Q6, Q8
Assignment (please use Excel)
Given the data below, use least-squares regression to fit a) a straight line, b) a power equation, c) a saturation-growth-rate equation and d) a parabola. Find the r² value and justify which equation best represents the data.

x 5 10 15 20 25 30 35 40 45 50
y 17 24 31 33 37 37 40 40 42 51
Let's do past year questions Q3 and Q6

Develop a MATLAB program, written in the command window, that solves this problem and gives the linear regression equation.

Let's try MATLAB to solve a linear regression problem:

T (°C) 4 8 12 16 20 24 28
V (10⁻² cm²/s) 1.88 1.67 1.49 1.34 1.22 1.11 1.02


Questions?
THE END

Thank You for Your Attention
