
8. Curve Fitting
8.1 Introduction
• Purpose: to represent a function based on knowledge of its behaviour at certain discrete points.
• There are two approaches:
  - Interpolation → produces a function that matches the given data exactly.
  - Regression → produces an approximate function for the given data.
• Applications:
  - Trend analysis
  - Hypothesis testing
• Sources of data:
  - Experimental data
  - Discrete numerical solutions
[Figure: interpolation (a polynomial through every point) vs. regression (a linear trend through scattered points)]
Comparison between Interpolation & Regression

Interpolation:
• The data is very precise.
• The curve fits each point exactly.
• Example: relationships generated from an exact solution/calculation.

Regression:
• The data contains error or noise.
• The curve does not necessarily fit each point exactly but represents the general trend of the data.
• Example: experimental results.
8.1 Interpolation
8.1.1 Newton's Divided-Difference
• Polynomial interpolation makes use of a polynomial form to fit a curve.
• It requires (n + 1) data points for an n-th order polynomial.
• The polynomial is the form of the fit; Newton's divided differences is one method of constructing it.
• The simplest Newton divided-difference form is linear → linear interpolation:

$$\frac{f_1(x) - f(x_0)}{x - x_0} = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$$

where the right-hand side is the finite divided-difference approximation of the first derivative, so

$$f_1(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_0)$$

• Data points: [x0, f(x0)], [x1, f(x1)]
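A minimal sketch of this formula in Python (the function name and the ln x test values are ours, not from the slides):

```python
def linear_interp(x, x0, f0, x1, f1):
    """First-order (linear) Newton interpolation between (x0, f0) and (x1, f1)."""
    return f0 + (f1 - f0) / (x1 - x0) * (x - x0)

# Estimate ln 2 from ln 1 = 0 and ln 4 = 1.386294
# (cf. the 0.4621 first-order entry in the table a few slides below)
print(linear_interp(2.0, 1.0, 0.0, 4.0, 1.386294))  # ~0.4621 vs ln 2 = 0.6931
```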

61
• General form of Newton's interpolating polynomials:

$$f_n(x) = b_0 + b_1(x - x_0) + \cdots + b_n(x - x_0)(x - x_1)\cdots(x - x_{n-1})$$

$$b_0 = f(x_0), \quad b_1 = f[x_1, x_0], \quad b_2 = f[x_2, x_1, x_0], \quad \ldots, \quad b_n = f[x_n, x_{n-1}, \ldots, x_1, x_0]$$

where the divided differences are defined recursively:

$$f[x_i, x_j] = \frac{f(x_i) - f(x_j)}{x_i - x_j}$$

$$f[x_n, x_{n-1}, \ldots, x_1, x_0] = \frac{f[x_n, x_{n-1}, \ldots, x_1] - f[x_{n-1}, \ldots, x_1, x_0]}{x_n - x_0}$$
• Substituting the coefficients shows the recursive nature of Newton's divided differences:

$$f_n(x) = f(x_0) + f[x_1, x_0](x - x_0) + f[x_2, x_1, x_0](x - x_0)(x - x_1) + \cdots + f[x_n, x_{n-1}, \ldots, x_1, x_0](x - x_0)(x - x_1)\cdots(x - x_{n-1})$$
• Algorithm: set f(xi) = yi, then build the divided-difference table column by column from

$$f[x_i, x_j] = \frac{f(x_i) - f(x_j)}{x_i - x_j}$$

The first stage yields the linear estimate; each further stage (e.g. yint2 → f2(x)) adds the next-order correction together with an error estimate. A Python sketch follows.
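A minimal implementation of this table-building procedure, assuming the standard recursive definition above (names and test data are ours):

```python
import numpy as np

def newton_interp(xs, ys, x):
    """Evaluate the Newton divided-difference polynomial through (xs, ys) at x."""
    n = len(xs)
    dd = np.zeros((n, n))          # dd[i, j] holds f[x_i, ..., x_{i+j}]
    dd[:, 0] = ys                  # column 0: f(x_i) = y_i
    for j in range(1, n):
        for i in range(n - j):
            dd[i, j] = (dd[i + 1, j - 1] - dd[i, j - 1]) / (xs[i + j] - xs[i])
    # Accumulate f(x0) + f[x1,x0](x-x0) + ... in Horner-like fashion
    result = dd[0, n - 1]
    for i in range(n - 2, -1, -1):
        result = result * (x - xs[i]) + dd[0, i]
    return result

# Cubic estimate of ln 2 through x = 1, 4, 6, 5 (cf. the 0.6288 entry below)
xs = np.array([1.0, 4.0, 6.0, 5.0])
print(newton_interp(xs, np.log(xs), 2.0))  # ~0.6288 vs ln 2 = 0.6931
```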

• Effect of higher order (Ex. 18.3), estimating ln 2 = 0.6931472 (true value):

  Order            Data points required    f(x)      εt (%)
  1st (linear)     2                       0.4621    33.3
  2nd (quadratic)  3                       0.5658    18.4
  3rd (cubic)      4                       0.6288     9.3
8.1.3 Lagrange
• Form:

$$f_n(x) = \sum_{i=0}^{n} L_i(x)\, f(x_i), \qquad L_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}$$

• Rationale:
  - Each Li(x) is 1 at x = xi and zero at all other data points, so the i-th term takes the value f(xi) at data point xi.
  - Therefore the curve passes through each data point → a characteristic of interpolation.
• Example: Lagrange interpolating polynomial of order 3:

$$f_3(x) = \sum_{i=0}^{3} L_i(x) f(x_i) = L_0 f(x_0) + L_1 f(x_1) + L_2 f(x_2) + L_3 f(x_3)$$

$$f_3(x) = \frac{(x - x_1)(x - x_2)(x - x_3)}{(x_0 - x_1)(x_0 - x_2)(x_0 - x_3)}\, f(x_0) + \frac{(x - x_0)(x - x_2)(x - x_3)}{(x_1 - x_0)(x_1 - x_2)(x_1 - x_3)}\, f(x_1) + \frac{(x - x_0)(x - x_1)(x - x_3)}{(x_2 - x_0)(x_2 - x_1)(x_2 - x_3)}\, f(x_2) + \frac{(x - x_0)(x - x_1)(x - x_2)}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)}\, f(x_3)$$
• Lagrange interpolating polynomial pseudocode: the slide's pseudocode figure is summarised by the sketch below.
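A minimal Python rendering of the Lagrange form (the pseudocode figure itself is not reproduced; the function name is ours):

```python
import math

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)   # L_i(x) = prod_{j != i} (x-xj)/(xi-xj)
        total += Li * yi                     # each term contributes f(xi) at x = xi
    return total

# Same cubic ln 2 estimate as with Newton's divided differences
xs = [1.0, 4.0, 6.0, 5.0]
print(lagrange_interp(xs, [math.log(v) for v in xs], 2.0))  # ~0.6288
```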

• Homework:
  - Use the data in Prob. 18.5.
  - Calculate f(2.8) using Newton's interpolating polynomials of order 1 through 3. Choose the sequence of the points for your estimates to attain the best possible accuracy.
  - Calculate f(2.8) using Lagrange polynomials of order 1 through 3.
  - Compare the methods.
8.1.6 Spline Interpolation

[Figure: a curve divided into Segment I, Segment II, Segment III, ...; f_j(x_i) denotes the j-th segment evaluated at the i-th data point]

Principles (for a spline of m segments through the data points x0, ..., xn):
1. At the first and last points, the function values equal the data:

$$f_1(x_0) = y_0, \qquad f_m(x_n) = y_n$$

2. At interior knots, the values of adjacent segment functions must be equal:

$$f_j(x_i) = f_{j+1}(x_i)$$

3. At interior knots, the slopes of adjacent segment functions must be equal:

$$\frac{d}{dx} f_j(x_i) = \frac{d}{dx} f_{j+1}(x_i)$$

4. Assume the first point has zero second derivative:

$$\frac{d^2}{dx^2} f_1(x_0) = 0$$
Example: quadratic splines (textbook Ex. 18.9):

• Quadratic segments: $f_j(x) = a_j x^2 + b_j x + c_j$
• Data points: (3.0, 2.5); (4.5, 1.0); (7.0, 2.5); (9.0, 0.5)

• Applying the principles:
1. End points:
$$9a_1 + 3b_1 + c_1 = 2.5, \qquad 81a_3 + 9b_3 + c_3 = 0.5$$
2. Equal values at the interior knots:
$$20.25a_1 + 4.5b_1 + c_1 = 1.0, \qquad 20.25a_2 + 4.5b_2 + c_2 = 1.0$$
$$49a_2 + 7b_2 + c_2 = 2.5, \qquad 49a_3 + 7b_3 + c_3 = 2.5$$
3. Equal slopes at the interior knots:
$$9a_1 + b_1 - 9a_2 - b_2 = 0, \qquad 14a_2 + b_2 - 14a_3 - b_3 = 0$$
4. Zero second derivative at the first point:
$$a_1 = 0$$
• In matrix form, these conditions assemble into a linear system for the coefficients:

$$[A]\,\{a_1 \;\; b_1 \;\; c_1 \;\; a_2 \;\; \cdots \;\; c_3\}^T = \{d\}$$

• Solving for a1, b1, c1, a2, ..., c3 gives:

$$f_I(x) = -x + 5.5 \qquad\qquad\qquad 3.0 \le x \le 4.5$$
$$f_{II}(x) = 0.64x^2 - 6.76x + 18.46 \qquad 4.5 \le x \le 7.0$$
$$f_{III}(x) = -1.6x^2 + 24.6x - 91.3 \qquad 7.0 \le x \le 9.0$$
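As a check, the system (with a1 = 0 applied, leaving eight unknowns) can be assembled and solved with NumPy; a minimal sketch, with the equation ordering chosen by us:

```python
import numpy as np

# Unknowns: [b1, c1, a2, b2, c2, a3, b3, c3]  (a1 = 0 already applied)
A = np.array([
    [3.0, 1, 0,     0,    0, 0,     0,    0],   # f1(3.0)  = 2.5
    [4.5, 1, 0,     0,    0, 0,     0,    0],   # f1(4.5)  = 1.0
    [0,   0, 20.25, 4.5,  1, 0,     0,    0],   # f2(4.5)  = 1.0
    [0,   0, 49.0,  7.0,  1, 0,     0,    0],   # f2(7.0)  = 2.5
    [0,   0, 0,     0,    0, 49.0,  7.0,  1],   # f3(7.0)  = 2.5
    [0,   0, 0,     0,    0, 81.0,  9.0,  1],   # f3(9.0)  = 0.5
    [1.0, 0, -9.0, -1.0,  0, 0,     0,    0],   # f1'(4.5) = f2'(4.5)
    [0,   0, 14.0,  1.0,  0, -14.0, -1.0, 0],   # f2'(7.0) = f3'(7.0)
])
d = np.array([2.5, 1.0, 1.0, 2.5, 2.5, 0.5, 0.0, 0.0])
print(np.linalg.solve(A, d))
# -> [-1.  5.5  0.64 -6.76 18.46 -1.6  24.6  -91.3]
```

The printed coefficients match the three segment equations above.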
8.3 Regression
Review of Statistics
• Consider a population yi containing n members.
• Mean/average:

$$\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i$$

• Standard deviation:

$$s_y = \sqrt{\frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n - 1}}$$

In normalised form, the coefficient of variation:

$$\mathrm{COV} = \frac{s_y}{\bar{y}}$$

or, equivalently, for the variance:

$$s_y^2 = \frac{n \sum y_i^2 - \left(\sum y_i\right)^2}{n(n-1)}$$

Normal distribution: https://www.youtube.com/shorts/Vo9Esp1yaC8
• If the population is composed of a number of measurements or numerical calculations:
  - Mean → accuracy (the closeness to the true value)
  - Standard deviation → precision (the closeness among the members)
• Regression makes use of this principle since, often, the population is scattered due to random error (from calculation or measurement).
• As in interpolation, the representing curve can be linear, quadratic, or a general polynomial.
• The big issue is choosing the criterion for the best fit.
The Best-Fit Criterion
• Suppose n data points yi are to be fitted linearly (a0 + a1xi), leaving an error/residual ei defined by:

$$e_i = y_i - a_0 - a_1 x_i$$

• Proposed criteria for the best fit:
  - Minimise the sum of the errors:
$$\min \sum e_i$$
  - Minimise the sum of the absolute values of the errors:
$$\min \sum |e_i|$$
  - Minimise the maximum error (minimax):
$$\min\left(\max |e_i|\right)$$
  - Minimise the sum of the squares of the errors:
$$\min \sum e_i^2$$
[Figure: the alternative best-fit criteria — min Σei, min Σ|ei|, and minimax — illustrated graphically]
• The 'best' criterion is chosen to be least squares, which is widely used in even more sophisticated regressions and will be used for the entire discussion:

$$S_r = \sum \left(y_{i,\mathrm{measured}} - y_{i,\mathrm{model}}\right)^2 = \sum (y_i - a_0 - a_1 x_i)^2$$
Quantification of the Fitting Error
• The error is the sum of squares of the residuals (SSE), Sr:

$$S_r = \sum \left(y_i - f(x_i)\right)^2$$

• If the curve fit uses first order (linear), then:

$$f_1(x_i) = a_0 + a_1 x_i$$

• The regression is derived by minimising the SSE, Sr:

$$S_r = \sum (y_i - a_0 - a_1 x_i)^2$$

$$\frac{\partial S_r}{\partial a_0} = 0 \quad \text{and} \quad \frac{\partial S_r}{\partial a_1} = 0$$
Minimization
• For a0:

$$\frac{\partial S_r}{\partial a_0} = -2 \sum (y_i - a_0 - a_1 x_i) = 0 \;\Rightarrow\; \sum y_i = n a_0 + a_1 \sum x_i$$

• For a1:

$$\frac{\partial S_r}{\partial a_1} = -2 \sum (y_i - a_0 - a_1 x_i)\, x_i = 0 \;\Rightarrow\; \sum x_i y_i = a_0 \sum x_i + a_1 \sum x_i^2$$

• Solving the two equations simultaneously:

$$a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2}, \qquad a_0 = \bar{y} - a_1 \bar{x}$$

• See Examples 17.1 and 17.3; a sketch follows.
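A minimal sketch of these formulas in Python, using what we believe is the data of Example 17.1 (treat the numbers as illustrative):

```python
import numpy as np

# Data as we recall it from Example 17.1 (Chapra & Canale)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])

n = len(x)
a1 = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x ** 2).sum() - x.sum() ** 2)
a0 = y.mean() - a1 * x.mean()
print(a0, a1)  # ~0.0714, ~0.8393 -> y = 0.0714 + 0.8393 x
```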

• Other form: normalisation of the SSE → R-square (r²):

$$r^2 = \frac{S_t - S_r}{S_t}, \qquad S_t = \sum (y_i - \bar{y})^2$$

• This represents the reduction of the squared error of the fit relative to the spread of the original data, yi.
• Taking the square root gives the correlation coefficient, r.
For a perfect fit r = 1; for no correlation at all r = 0.
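Continuing the linear-fit sketch above, St, Sr, and r² can be computed directly (helper name ours; same illustrative data):

```python
import numpy as np

def r_squared(x, y, a0, a1):
    """Coefficient of determination of the line a0 + a1*x (helper name ours)."""
    St = ((y - y.mean()) ** 2).sum()      # spread of the data about its mean
    Sr = ((y - a0 - a1 * x) ** 2).sum()   # residual sum of squares of the fit
    return (St - Sr) / St

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])
print(r_squared(x, y, 0.0714, 0.8393))   # ~0.87, so r ~ 0.93
```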

• Use common sense to fit data: the relationship is not necessarily linear. It could be parabolic, exponential, logarithmic, etc.
• Linearization if necessary!
  - Know the basic relationship.
  - Modify the equation into a linear relationship, e.g. for an exponential model:

$$y = \alpha_1 e^{\beta_1 x} \;\Rightarrow\; \ln y = \ln \alpha_1 + \beta_1 x$$

which has the linear form

$$y' = a_0 + a_1 x \qquad (y' = \ln y,\; a_0 = \ln \alpha_1,\; a_1 = \beta_1)$$
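A sketch of the transformation in Python: fit a straight line to (x, ln y), then undo the transformation (data synthesised for illustration):

```python
import numpy as np

# Synthetic data from y = 2 e^(0.5 x); the fit should recover alpha1 = 2, beta1 = 0.5
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * np.exp(0.5 * x)

yp = np.log(y)                            # y' = ln y is linear in x
n = len(x)
a1 = (n * (x * yp).sum() - x.sum() * yp.sum()) / (n * (x ** 2).sum() - x.sum() ** 2)
a0 = yp.mean() - a1 * x.mean()
print(np.exp(a0), a1)                     # alpha1 ~ 2.0, beta1 ~ 0.5
```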

Linearization: Example 17.4
* Favourite textbook problems:
  - 17.6: compare the results with those of MS Excel
  - 17.13: compare the results with those of MS Excel
Polynomial Regression (Least Squares)
• Minimise the sum of the squares of the errors (least squares):

$$S_r = \sum \left(y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_m x_i^m\right)^2$$

• That is, differentiate Sr with respect to each of a0, a1, ..., am:

$$\frac{\partial S_r}{\partial a_0} = 0; \quad \frac{\partial S_r}{\partial a_1} = 0; \quad \ldots; \quad \frac{\partial S_r}{\partial a_m} = 0$$

• Solving the resulting system of equations gives a0, a1, ..., am:

$$\begin{bmatrix} n & \sum x_i & \sum x_i^2 & \cdots & \sum x_i^m \\ \sum x_i & \sum x_i^2 & \sum x_i^3 & \cdots & \sum x_i^{m+1} \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 & \cdots & \sum x_i^{m+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sum x_i^m & \sum x_i^{m+1} & \sum x_i^{m+2} & \cdots & \sum x_i^{2m} \end{bmatrix} \begin{Bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_m \end{Bmatrix} = \begin{Bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \\ \vdots \\ \sum x_i^m y_i \end{Bmatrix}$$

Figs. 17.12 and 17.13 give the algorithm and pseudocode to assemble the matrix.
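Figs. 17.12 and 17.13 are not reproduced here; a minimal NumPy sketch that assembles and solves the same normal equations (names and data ours, for illustration):

```python
import numpy as np

def polyfit_normal_eqs(x, y, m):
    """Least-squares polynomial of order m via the normal equations above."""
    A = np.array([[(x ** (i + j)).sum() for j in range(m + 1)]
                  for i in range(m + 1)])            # A[i, j] = sum x^(i+j); A[0, 0] = n
    d = np.array([(y * x ** i).sum() for i in range(m + 1)])  # d[i] = sum x^i y
    return np.linalg.solve(A, d)                     # [a0, a1, ..., am]

# Quadratic fit to illustrative data
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])
print(polyfit_normal_eqs(x, y, 2))  # ~[2.48, 2.36, 1.86]
```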
n-Dimensional Space Regression
• So far, one variable regulates one response function, e.g. x → f(x).
• Now consider a problem with more than one variable regulating one response function.
• Application: a phenomenon that is regulated by more than one input variable.
• The simplest form is a linear relationship with two input variables (multiple linear regression):

$$y = a_0 + a_1 x_1 + a_2 x_2$$

• So, the 'line' in two-dimensional space becomes a 'flat plane' in three-dimensional space.

[Figure: two-dimensional linear regression — a plane fitted through scattered points in (x1, x2, y) space]
• The problem solving remains the same:
  - Express the sum of the squares of the errors, Sr.
  - Differentiate Sr with respect to each coefficient a0, a1, and a2.
  - Build the system of equations in matrix form:

$$\begin{bmatrix} n & \sum x_{1i} & \sum x_{2i} \\ \sum x_{1i} & \sum x_{1i}^2 & \sum x_{1i} x_{2i} \\ \sum x_{2i} & \sum x_{1i} x_{2i} & \sum x_{2i}^2 \end{bmatrix} \begin{Bmatrix} a_0 \\ a_1 \\ a_2 \end{Bmatrix} = \begin{Bmatrix} \sum y_i \\ \sum x_{1i} y_i \\ \sum x_{2i} y_i \end{Bmatrix}$$

  - Solve it (a sketch follows).
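A minimal sketch of the 3×3 system in NumPy; the data is generated from a known plane so the fit can be verified:

```python
import numpy as np

# Points generated from y = 5 + 4*x1 - 3*x2 (cf. the textbook's multiple-regression example)
x1 = np.array([0.0, 2.0, 2.5, 1.0, 4.0, 7.0])
x2 = np.array([0.0, 1.0, 2.0, 3.0, 6.0, 2.0])
y = 5.0 + 4.0 * x1 - 3.0 * x2

n = len(y)
A = np.array([
    [n,         x1.sum(),         x2.sum()],
    [x1.sum(),  (x1 ** 2).sum(),  (x1 * x2).sum()],
    [x2.sum(),  (x1 * x2).sum(),  (x2 ** 2).sum()],
])
d = np.array([y.sum(), (x1 * y).sum(), (x2 * y).sum()])
print(np.linalg.solve(A, d))  # -> [5. 4. -3.]
```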

Advanced Topics
• Non-linear regression
• Fourier approximation
• Response-surface modelling
• Metamodelling / neural networks
