Curve Fitting


Solutions of Systems of Linear Algebraic Equations

Henok Mekonnen

April 16, 2024

Henok Mekonnen Solutions of Systems of Linear Algebraic Equations April 16, 2024 1 / 48
Summary

1 Introduction

2 Statistical background

3 Least Square Regression


Linear Regression
Linearization of non-linear relationships
Polynomial Regression
Multiple Linear Regression
General Least Square
Non-linear Regression

Summary

4 Interpolation
Overview
Newton’s Divided Difference Interpolation
Linear Interpolation
Quadratic Interpolation
General Polynomial Interpolation
Error Estimation
Lagrange’s Interpolation
Spline Interpolation
Multi-dimensional interpolation

5 Fourier Approximation
Periodic Functions and Sinusoidals
Curve fitting with Sinusoidals
Continuous Fourier Series

Introduction

Engineering data are usually available in discrete form rather than as a continuum. For example:
Experimental data
Survey data
Historical data

What is Curve Fitting?

Discrete data represent values at a finite number of points and are limited to those points only. Engineering problems often extend to regions where data are not available. The extended problem region can be addressed by approximating the data with representative functions over different regions. This technique is called curve
fitting.

Introduction

There are two kinds of curve-fitting solutions:

The first aims at deriving a generalized equation or curve that closely follows a set of data points, without necessarily passing through any of the points. This approach is called regression.
In the other approach, the data points are known to be exact/accurate, and missing data, usually between two available data points, are estimated using interpolation.

Statistical background

The arithmetic mean ($\bar{y}$) is the sum of the data point values ($y_i$) divided by the number of data points ($n$),
$$\bar{y} = \frac{\sum y_i}{n} \quad (1)$$
The standard deviation about the mean, $S_y$, is a common measure of the spread or deviation of the data points around the arithmetic mean,
$$S_y = \sqrt{\frac{S_t}{n-1}} \quad (2)$$
where $S_t$ is the total sum of the squares of the residuals between the data points and the mean,
$$S_t = \sum (y_i - \bar{y})^2 \quad (3)$$

Statistical Background
The variance is another measure of the deviation of the data points from the mean; it is the square of the standard deviation,
$$S_y^2 = \frac{\sum (y_i - \bar{y})^2}{n-1} \quad (4)$$
The above equation can be written more conveniently without evaluating the mean. Expanding the square,
$$S_y^2 = \frac{\sum (y_i^2 - 2 y_i \bar{y} + \bar{y}^2)}{n-1} \quad (5)$$
but
$$\sum y_i \bar{y} = y_1 \frac{(y_1 + y_2 + ... + y_n)}{n} + ... + y_n \frac{(y_1 + y_2 + ... + y_n)}{n} = \frac{(\sum y_i)^2}{n} \quad \text{and} \quad \sum \bar{y}^2 = n \bar{y}^2 = \frac{(\sum y_i)^2}{n} \quad (6)$$

Statistical Background

Thus,
$$S_y^2 = \frac{\sum y_i^2 - (\sum y_i)^2 / n}{n-1} \quad (7)$$

Normal distribution
Many variables in the natural and social sciences are normally or approximately normally distributed.
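As a quick numeric check, the quantities above can be computed directly; the sample values below are hypothetical:

```python
# Mean, standard deviation and variance of a small hypothetical sample,
# plus the shortcut variance formula that avoids evaluating the mean first.
data = [2.1, 2.9, 3.2, 2.5, 3.0, 2.8]
n = len(data)

mean = sum(data) / n                            # Eq. (1)
St = sum((y - mean) ** 2 for y in data)         # Eq. (3)
Sy = (St / (n - 1)) ** 0.5                      # Eq. (2)
var = Sy ** 2                                   # Eq. (4)

# Eq. (7): same variance without evaluating the mean
var_shortcut = (sum(y ** 2 for y in data) - sum(data) ** 2 / n) / (n - 1)
assert abs(var - var_shortcut) < 1e-12
```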

Linear Regression
This is the simplest of the least-squares regression techniques: the data points are approximated by a linear expression, graphically a straight line,
$$y = a_0 + a_1 x + e \quad (8)$$

Residual
In the above linear regression equation, the term $e$ is the residual: the difference between the value of a data point and the value the model calculates at that point.

Best fit criteria


How do we determine the regression function that best fits the data points? One common, simple criterion is to minimize the sum of the absolute residuals:
$$\sum_{i=1}^{n} |e_i| = \sum_{i=1}^{n} |y_i - a_0 - a_1 x_i| \quad (9)$$

Linear Regression
The above criterion has its shortcomings; for example, it doesn't give a unique solution. A better best-fit criterion is to minimize the sum of the squares of the residuals:
$$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_{i,measured} - y_{i,modeled})^2 = \sum (y_i - a_0 - a_1 x_i)^2 \quad (10)$$
Let's differentiate the total residual with respect to each coefficient:
$$\frac{\partial S_r}{\partial a_0} = -2 \sum (y_i - a_0 - a_1 x_i), \qquad \frac{\partial S_r}{\partial a_1} = -2 \sum [(y_i - a_0 - a_1 x_i) x_i] \quad (11)$$
Setting the derivatives to zero minimizes the total residual $S_r$:
$$0 = \sum y_i - \sum a_0 - a_1 \sum x_i, \qquad 0 = \sum y_i x_i - a_0 \sum x_i - a_1 \sum x_i^2 \quad (12)$$

Linear Regression

Rewriting the above equations for $i = 1$ to $n$:
$$\sum y_i = n a_0 + \left( \sum x_i \right) a_1 \qquad \text{(dividing each term by } n \text{: } \bar{y} = a_0 + a_1 \bar{x} \text{)}$$
$$\sum y_i x_i = a_0 \sum x_i + a_1 \sum x_i^2 \quad (13)$$
Solving the above equations simultaneously, we can evaluate $a_1$ and $a_0$:
$$a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - (\sum x_i)^2}, \qquad a_0 = \bar{y} - a_1 \bar{x} \quad (14)$$
$\bar{x}$ and $\bar{y}$ are the arithmetic mean values of $x$ and $y$, respectively. This solution is called the least-squares regression solution.
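A minimal sketch of Eq. (14) in Python, assuming the small hypothetical data set below:

```python
# Closed-form least-squares line fit, Eq. (14).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]   # hypothetical data, roughly y = 2x

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)

a1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)   # slope
a0 = sy / n - a1 * sx / n                        # intercept: a0 = ybar - a1*xbar
```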

Linear Regression

Goodness of fit
The distribution of the actual data points around the regression line shows how good the fit is. It can be quantified by a standard deviation taken around the regression line instead of around the mean value:
$$s_{y/x} = \sqrt{\frac{S_r}{n-2}} \quad (15)$$
The divisor $n - 2$ is chosen instead of $n$ because a straight line through just two points fits them exactly, so a spread measure is meaningless there; accordingly, $n = 2$ makes the denominator of the above expression zero.

Linearization of non-linear relationships
Not all data can be approximated by a linear regression.
There are some data whose non-linear relationship can be converted to a linear relationship.
One notable case is converting an exponential model into a linear logarithmic one:
$$y = \alpha_1 e^{\beta_1 x} \quad \Rightarrow \quad \ln y = \ln \alpha_1 + \beta_1 x \quad (16)$$
Another example is when the inverse of an expression gives a linear expression:
$$y = \alpha_3 \frac{x}{\beta_3 + x} \quad \Rightarrow \quad \frac{1}{y} = \frac{1}{\alpha_3} + \frac{\beta_3}{\alpha_3} \frac{1}{x} \quad (17)$$
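A short sketch of the log-transform approach of Eq. (16), assuming hypothetical data generated from a known exponential ($\alpha_1 = 2$, $\beta_1 = 0.5$) so the fit should recover those values:

```python
import math

# Fit y = alpha1 * exp(beta1 * x) by linear regression of ln(y) on x.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]   # exact exponential data

lys = [math.log(y) for y in ys]
n = len(xs)
sx, sy = sum(xs), sum(lys)
sxy = sum(x * y for x, y in zip(xs, lys))
sxx = sum(x * x for x in xs)

beta1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)   # slope = beta1
lna = sy / n - beta1 * sx / n                       # intercept = ln(alpha1)
alpha1 = math.exp(lna)
```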

Polynomial Regression
In a similar pattern, a polynomial regression equation can be written as
$$y = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n + e \quad (18)$$
For a second-order polynomial, the sum of the squares of the residuals is
$$S_r = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i - a_2 x_i^2)^2 \quad (19)$$
Again, to minimize the total sum of the squares of the residuals, we differentiate $S_r$ with respect to each coefficient and equate it to zero:
$$\frac{\partial S_r}{\partial a_0} = -2 \sum (y_i - a_0 - a_1 x_i - a_2 x_i^2)$$
$$\frac{\partial S_r}{\partial a_1} = -2 \sum (y_i - a_0 - a_1 x_i - a_2 x_i^2) x_i \quad (20)$$
$$\frac{\partial S_r}{\partial a_2} = -2 \sum (y_i - a_0 - a_1 x_i - a_2 x_i^2) x_i^2$$

Polynomial Regression

Setting the derivatives to zero:
$$\sum y_i = n a_0 + a_1 \sum x_i + a_2 \sum x_i^2$$
$$\sum y_i x_i = a_0 \sum x_i + a_1 \sum x_i^2 + a_2 \sum x_i^3 \quad (21)$$
$$\sum y_i x_i^2 = a_0 \sum x_i^2 + a_1 \sum x_i^3 + a_2 \sum x_i^4$$
Putting the set of equations in matrix form:
$$\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{Bmatrix} a_0 \\ a_1 \\ a_2 \end{Bmatrix} = \begin{Bmatrix} \sum y_i \\ \sum y_i x_i \\ \sum y_i x_i^2 \end{Bmatrix}$$

Polynomial Regression
The above second-order polynomial case can be extended to a general $m$-th order polynomial. In matrix form:
$$\begin{bmatrix} n & \sum x_i & \cdots & \sum x_i^m \\ \sum x_i & \sum x_i^2 & \cdots & \sum x_i^{m+1} \\ \vdots & \vdots & \ddots & \vdots \\ \sum x_i^m & \sum x_i^{m+1} & \cdots & \sum x_i^{2m} \end{bmatrix} \begin{Bmatrix} a_0 \\ a_1 \\ \vdots \\ a_m \end{Bmatrix} = \begin{Bmatrix} \sum y_i \\ \sum y_i x_i \\ \vdots \\ \sum y_i x_i^m \end{Bmatrix}$$
The goodness of fit, or the standard error, is
$$s_{y/x} = \sqrt{\frac{S_r}{n - (m+1)}} \quad (22)$$
where $n$ is the number of data points and $m$ is the order of the regression polynomial.
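The normal-equations approach can be sketched as follows; the solver is a plain Gaussian elimination standing in for any of the methods of chapter three, and the data are hypothetical:

```python
# Least-squares polynomial of order m via the normal equations above.
def polyfit_normal(xs, ys, m):
    n = m + 1
    # coefficient matrix of power sums and right-hand side
    A = [[sum(x ** (r + c) for x in xs) for c in range(n)] for r in range(n)]
    b = [sum(y * x ** r for x, y in zip(xs, ys)) for r in range(n)]
    # naive Gaussian elimination with partial pivoting
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, n))) / A[r][r]
    return a  # [a0, a1, ..., am]

# Exact quadratic data y = 1 + 2x + 3x^2 should be reproduced
coeffs = polyfit_normal([0.0, 1.0, 2.0, 3.0, 4.0], [1.0, 6.0, 17.0, 34.0, 57.0], 2)
```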
Multiple Linear Regression
In many practical cases, the dependent variable is a function of more than one independent variable:
$$y = a_0 + a_1 x_1 + a_2 x_2 + e \quad (23)$$
The sum of the squares of the residuals is
$$S_r = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_{1i} - a_2 x_{2i})^2 \quad (24)$$
The derivatives:
$$\frac{\partial S_r}{\partial a_0} = -2 \sum (y_i - a_0 - a_1 x_{1i} - a_2 x_{2i})$$
$$\frac{\partial S_r}{\partial a_1} = -2 \sum (y_i - a_0 - a_1 x_{1i} - a_2 x_{2i}) x_{1i} \quad (25)$$
$$\frac{\partial S_r}{\partial a_2} = -2 \sum (y_i - a_0 - a_1 x_{1i} - a_2 x_{2i}) x_{2i}$$
Multiple Linear Regression

Equating the derivatives to zero to minimize the residuals and putting the set of expressions in matrix form:
$$\begin{bmatrix} n & \sum x_{1i} & \sum x_{2i} \\ \sum x_{1i} & \sum x_{1i}^2 & \sum x_{1i} x_{2i} \\ \sum x_{2i} & \sum x_{1i} x_{2i} & \sum x_{2i}^2 \end{bmatrix} \begin{Bmatrix} a_0 \\ a_1 \\ a_2 \end{Bmatrix} = \begin{Bmatrix} \sum y_i \\ \sum x_{1i} y_i \\ \sum x_{2i} y_i \end{Bmatrix}$$

For $m$ independent variables the standard error is
$$s_{y/x} = \sqrt{\frac{S_r}{n - (m+1)}} \quad (26)$$
The significance of multiple linear regression extends to power equations of the general form
$$y = a_0 x_1^{a_1} x_2^{a_2} ... x_m^{a_m} \quad (27)$$

Multiple Linear Regression

The above power equation can be linearized as
$$\log y = \log a_0 + a_1 \log x_1 + a_2 \log x_2 + ... + a_m \log x_m \quad (28)$$
A generalized expression of the multiple linear regression for data of $m$ independent variables and $n$ data points:
$$\begin{bmatrix} n & \sum x_{1i} & \sum x_{2i} & \cdots & \sum x_{mi} \\ \sum x_{1i} & \sum x_{1i}^2 & \sum x_{1i} x_{2i} & \cdots & \sum x_{1i} x_{mi} \\ \sum x_{2i} & \sum x_{1i} x_{2i} & \sum x_{2i}^2 & \cdots & \sum x_{2i} x_{mi} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sum x_{mi} & \sum x_{1i} x_{mi} & \sum x_{2i} x_{mi} & \cdots & \sum x_{mi}^2 \end{bmatrix} \begin{Bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_m \end{Bmatrix} = \begin{Bmatrix} \sum y_i \\ \sum x_{1i} y_i \\ \sum x_{2i} y_i \\ \vdots \\ \sum x_{mi} y_i \end{Bmatrix}$$

Multiple Linear Regression

The coefficient matrix in the above expression can be obtained in a way better suited to a computer algorithm: as the element-wise summation of outer products,
$$\sum_{i=1}^{n} \begin{Bmatrix} 1 \\ x_{1i} \\ x_{2i} \\ \vdots \\ x_{mi} \end{Bmatrix} \begin{bmatrix} 1 & x_{1i} & x_{2i} & ... & x_{mi} \end{bmatrix}$$
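The outer-product accumulation can be sketched as follows, assuming two independent variables ($m = 2$) and hypothetical data; the accumulated system would then be handed to any solver from chapter three:

```python
# Build the normal-equation matrix by summing outer products row by row.
x1 = [0.0, 1.0, 2.0, 3.0]
x2 = [1.0, 0.0, 2.0, 1.0]
ys = [5.0 + 4.0 * a - 3.0 * b for a, b in zip(x1, x2)]   # y = 5 + 4*x1 - 3*x2

m = 2
A = [[0.0] * (m + 1) for _ in range(m + 1)]
rhs = [0.0] * (m + 1)
for a, b, y in zip(x1, x2, ys):
    row = [1.0, a, b]                     # [1, x1i, x2i]
    for r in range(m + 1):
        for c in range(m + 1):
            A[r][c] += row[r] * row[c]    # outer product, summed element-wise
        rhs[r] += row[r] * y
```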

General Least Square
The general matrix expression for the multiple linear regression equation will have profound use:
$$y = a_0 z_0 + a_1 z_1 + a_2 z_2 + ... + a_m z_m + e \quad (29)$$
This general expression can also encompass polynomials, as long as the terms $z_0, z_1, ..., z_m$ represent monomial functions:
$$z_0 = x^0 = 1, \quad z_1 = x, \quad z_2 = x^2, \quad z_3 = x^3, \quad ..., \quad z_m = x^m \quad (30)$$
For multiple data points:
$$y_i = a_0 z_{0i} + a_1 z_{1i} + a_2 z_{2i} + ... + a_m z_{mi} + e_i \quad (31)$$
The sum of the squares of the residuals is
$$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - a_0 z_{0i} - a_1 z_{1i} - a_2 z_{2i} - ... - a_m z_{mi})^2 = \sum_{i=1}^{n} \left( y_i - \sum_{j=0}^{m} a_j z_{ji} \right)^2 \quad (32)$$
General Least Square

$m$ is the number of parameters or independent variables involved and $n$ is the number of data points.
Differentiating $S_r$ with respect to the coefficients:
$$\frac{\partial S_r}{\partial a_0} = -2 \sum (y_i - a_0 z_{0i} - a_1 z_{1i} - a_2 z_{2i} - ... - a_m z_{mi}) z_{0i}$$
$$\frac{\partial S_r}{\partial a_1} = -2 \sum (y_i - a_0 z_{0i} - a_1 z_{1i} - a_2 z_{2i} - ... - a_m z_{mi}) z_{1i}$$
$$\frac{\partial S_r}{\partial a_2} = -2 \sum (y_i - a_0 z_{0i} - a_1 z_{1i} - a_2 z_{2i} - ... - a_m z_{mi}) z_{2i} \quad (33)$$
$$\vdots$$
$$\frac{\partial S_r}{\partial a_m} = -2 \sum (y_i - a_0 z_{0i} - a_1 z_{1i} - a_2 z_{2i} - ... - a_m z_{mi}) z_{mi}$$

General Least Square
Setting all the derivatives to zero to minimize the residuals:
$$a_0 \sum z_{0i}^2 + a_1 \sum z_{0i} z_{1i} + a_2 \sum z_{0i} z_{2i} + ... + a_m \sum z_{0i} z_{mi} = \sum z_{0i} y_i$$
$$a_0 \sum z_{0i} z_{1i} + a_1 \sum z_{1i}^2 + a_2 \sum z_{1i} z_{2i} + ... + a_m \sum z_{1i} z_{mi} = \sum z_{1i} y_i$$
$$a_0 \sum z_{0i} z_{2i} + a_1 \sum z_{1i} z_{2i} + a_2 \sum z_{2i}^2 + ... + a_m \sum z_{2i} z_{mi} = \sum z_{2i} y_i$$
$$\vdots$$
$$a_0 \sum z_{0i} z_{mi} + a_1 \sum z_{1i} z_{mi} + a_2 \sum z_{2i} z_{mi} + ... + a_m \sum z_{mi}^2 = \sum z_{mi} y_i \quad (34)$$
In matrix form:
$$\begin{bmatrix} \sum z_{0i}^2 & \sum z_{0i} z_{1i} & \sum z_{0i} z_{2i} & \cdots & \sum z_{0i} z_{mi} \\ \sum z_{0i} z_{1i} & \sum z_{1i}^2 & \sum z_{1i} z_{2i} & \cdots & \sum z_{1i} z_{mi} \\ \sum z_{0i} z_{2i} & \sum z_{1i} z_{2i} & \sum z_{2i}^2 & \cdots & \sum z_{2i} z_{mi} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sum z_{0i} z_{mi} & \sum z_{1i} z_{mi} & \sum z_{2i} z_{mi} & \cdots & \sum z_{mi}^2 \end{bmatrix} \begin{Bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_m \end{Bmatrix} = \begin{Bmatrix} \sum z_{0i} y_i \\ \sum z_{1i} y_i \\ \sum z_{2i} y_i \\ \vdots \\ \sum z_{mi} y_i \end{Bmatrix}$$
General Least Square

Notice that the coefficient matrix is simply the product $[Z]^T [Z]$, where $[Z]$ is the matrix of basis-function values evaluated at the data points and $[Z]^T$ its transpose. Therefore, the general least-squares regression in condensed matrix form is:
$$[[Z]^T [Z]] \{A\} = [Z]^T \{Y\} \quad (35)$$
The vector of the unknown coefficients:
$$\{A\} = [[Z]^T [Z]]^{-1} [Z]^T \{Y\} \quad (36)$$
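A compact sketch of Eqs. (35) and (36), assuming a monomial basis $\{1, x\}$ and hypothetical data; the resulting 2×2 system is solved by Cramer's rule for brevity:

```python
# General least squares via the design matrix [Z].
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0, 5.0, 8.0, 11.0]            # exactly y = 2 + 3x
basis = [lambda x: 1.0, lambda x: x]  # z0 = 1, z1 = x

Z = [[z(x) for z in basis] for x in xs]          # n x (m+1) design matrix
ZtZ = [[sum(Z[i][r] * Z[i][c] for i in range(len(xs)))
        for c in range(len(basis))] for r in range(len(basis))]
ZtY = [sum(Z[i][r] * ys[i] for i in range(len(xs))) for r in range(len(basis))]

# 2x2 solve by Cramer's rule (chapter three offers general alternatives)
det = ZtZ[0][0] * ZtZ[1][1] - ZtZ[0][1] * ZtZ[1][0]
a0 = (ZtY[0] * ZtZ[1][1] - ZtZ[0][1] * ZtY[1]) / det
a1 = (ZtZ[0][0] * ZtY[1] - ZtY[0] * ZtZ[1][0]) / det
```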

Non-linear Regression

Some data do not fit the general linear least-squares regression curves, for example
$$f(x) = a_0 (1 - e^{-a_1 x}) + e \quad (37)$$
In this case, the non-linear equation can be locally approximated by a first-order Taylor series, giving a linear function. This method is called the Gauss-Newton method.
$$y_i = f(x_i, a_0, a_1, ..., a_m) + e_i \quad (38)$$
$$y_i = f(x_i) + e_i \quad (39)$$
$$f(x_i)_{j+1} = f(x_i)_j + \frac{\partial f(x_i)_j}{\partial a_0} \Delta a_0 + \frac{\partial f(x_i)_j}{\partial a_1} \Delta a_1 \quad (40)$$
$$y_i - f(x_i)_j = \frac{\partial f(x_i)_j}{\partial a_0} \Delta a_0 + \frac{\partial f(x_i)_j}{\partial a_1} \Delta a_1 + e_i \quad (41)$$
where $j$ represents the iterative step and $i$ denotes the data point.

Non-linear Regression

 
In matrix form:
$$\{D\} = [Z_j] \{\Delta A\} + \{E\} \quad (42)$$
where
$$\{D\} = \begin{Bmatrix} y_1 - f(x_1) \\ y_2 - f(x_2) \\ \vdots \\ y_n - f(x_n) \end{Bmatrix}, \qquad \{\Delta A\} = \begin{Bmatrix} \Delta a_0 \\ \Delta a_1 \\ \vdots \\ \Delta a_m \end{Bmatrix}, \qquad [Z_j] = \begin{bmatrix} \frac{\partial f_1}{\partial a_0} & \frac{\partial f_1}{\partial a_1} \\ \frac{\partial f_2}{\partial a_0} & \frac{\partial f_2}{\partial a_1} \\ \vdots & \vdots \\ \frac{\partial f_n}{\partial a_0} & \frac{\partial f_n}{\partial a_1} \end{bmatrix}$$
The least-squares normal equations are then
$$[[Z_j]^T [Z_j]] \{\Delta A\} = [Z_j]^T \{D\} \quad (43)$$

Non-linear Regression

The method starts from initial guesses for the $a_k$'s and iteratively updates them:
$$a_{0,j+1} = a_{0,j} + \Delta a_0, \qquad a_{1,j+1} = a_{1,j} + \Delta a_1 \quad (44)$$
Convergence criterion:
$$|\varepsilon_a|_k = \left| \frac{a_{k,j+1} - a_{k,j}}{a_{k,j+1}} \right| \times 100\% \quad (45)$$
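A sketch of the Gauss-Newton iteration for the model $f(x) = a_0(1 - e^{-a_1 x})$ of Eq. (37), assuming hypothetical data generated from $a_0 = 3$, $a_1 = 0.5$ and reasonably close initial guesses:

```python
import math

xs = [0.25, 0.75, 1.25, 1.75, 2.25]
ys = [3.0 * (1.0 - math.exp(-0.5 * x)) for x in xs]   # exact model data

a0, a1 = 2.5, 0.6                      # initial guesses
for _ in range(50):
    f = [a0 * (1.0 - math.exp(-a1 * x)) for x in xs]
    d = [y - fi for y, fi in zip(ys, f)]              # residual vector {D}
    z0 = [1.0 - math.exp(-a1 * x) for x in xs]        # df/da0
    z1 = [a0 * x * math.exp(-a1 * x) for x in xs]     # df/da1
    # normal equations [Zj^T Zj]{dA} = Zj^T {D}, a symmetric 2x2 system
    s00 = sum(v * v for v in z0)
    s11 = sum(v * v for v in z1)
    s01 = sum(u * v for u, v in zip(z0, z1))
    r0 = sum(u * v for u, v in zip(z0, d))
    r1 = sum(u * v for u, v in zip(z1, d))
    det = s00 * s11 - s01 * s01
    da0 = (r0 * s11 - s01 * r1) / det
    da1 = (s00 * r1 - s01 * r0) / det
    a0, a1 = a0 + da0, a1 + da1
    if max(abs(da0 / a0), abs(da1 / a1)) < 1e-12:     # Eq. (45)
        break
```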

Interpolation: Overview

Unlike regression, an interpolated curve passes exactly through the


known data points. As a result, for n + 1 data points, there is only
one polynomial of order n which joins the data points exactly.
Thus, the polynomial equation of the interpolation function is given
by a general polynomial expression as:

f (x) = a0 + a1 x + a2 x 2 + ... + an x n (46)

Two approaches are employed to derive the polynomial interpolation


equations:
Newton Polynomials
Lagrange Polynomials

Newton’s Divided Difference Interpolation: Linear
Interpolation
Linear Interpolation: the interpolation equation is derived by equating slopes:
$$\frac{f_1(x) - f(x_0)}{x - x_0} = \frac{f(x_1) - f(x_0)}{x_1 - x_0} = \text{slope} \quad (47)$$

Figure: Linear Interpolation


Newton’s Divided Difference Interpolation: Quadratic
Interpolation
In this case we define a second-order (quadratic) polynomial, a form of the general polynomial expression tailored for deriving the interpolation function:
$$f_2(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1) \quad (48)$$
Expanding the above expression:
$$f_2(x) = (b_0 - b_1 x_0 + b_2 x_0 x_1) + (b_1 - b_2 x_0 - b_2 x_1) x + b_2 x^2 \quad (49)$$
Now the polynomial function can take the general expression for order 2:
$$f_2(x) = a_0 + a_1 x + a_2 x^2 \quad (50)$$
where
$$a_0 = b_0 - b_1 x_0 + b_2 x_0 x_1, \quad a_1 = b_1 - b_2 x_0 - b_2 x_1, \quad a_2 = b_2 \quad (51)$$
Newton’s Divided Difference Interpolation: Quadratic
Interpolation
Now let's evaluate the coefficients. At $x = x_0$:
$$f(x_0) = b_0 + b_1 (x_0 - x_0) + b_2 (x_0 - x_0)(x_0 - x_1) \;\Rightarrow\; b_0 = f(x_0) \quad (52)$$
Evaluating the function at $x = x_1$:
$$f(x_1) = b_0 + b_1 (x_1 - x_0) + b_2 (x_1 - x_0)(x_1 - x_1) \;\Rightarrow\; b_1 = \frac{f(x_1) - b_0}{x_1 - x_0} = \frac{f(x_1) - f(x_0)}{x_1 - x_0} \quad (53)$$
Evaluating the function at $x = x_2$:
$$f(x_2) = b_0 + b_1 (x_2 - x_0) + b_2 (x_2 - x_0)(x_2 - x_1) \;\Rightarrow\; b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0} \quad (54)$$
Newton’s Divided Difference Interpolation: General
Polynomial Interpolation

The general function for the coefficients is expressed in terms of the


finite divided differences
b0 = f (x0 )
b1 = f [x1 , x0 ]
b2 = f [x2 , x1 , x0 ] (55)
..
.
bn = f [xn , xn−1 , ..., x1 , x0 ]

Newton’s Divided Difference Interpolation: General
Polynomial Interpolation

The finite divided differences:
$$f[x_i, x_j] = \frac{f(x_i) - f(x_j)}{x_i - x_j}$$
$$f[x_i, x_j, x_k] = \frac{f[x_i, x_j] - f[x_j, x_k]}{x_i - x_k} \quad (56)$$
$$\vdots$$
$$f[x_n, x_{n-1}, ..., x_1, x_0] = \frac{f[x_n, x_{n-1}, ..., x_1] - f[x_{n-1}, x_{n-2}, ..., x_0]}{x_n - x_0}$$
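The divided-difference construction can be sketched as follows (the data points are hypothetical):

```python
# Build the Newton coefficients b0..bn from the divided-difference table,
# then evaluate b0 + b1(x-x0) + b2(x-x0)(x-x1) + ...
def divided_diff_coeffs(xs, ys):
    n = len(xs)
    col = list(ys)                 # zeroth-order differences f(xi)
    coeffs = [col[0]]
    for order in range(1, n):
        col = [(col[i + 1] - col[i]) / (xs[i + order] - xs[i])
               for i in range(n - order)]
        coeffs.append(col[0])      # top entry of each column is b_order
    return coeffs

def newton_eval(xs, coeffs, x):
    result, prod = 0.0, 1.0
    for b, xi in zip(coeffs, xs):
        result += b * prod
        prod *= (x - xi)
    return result

# Quadratic data y = x^2 is reproduced exactly by a 2nd-order polynomial
b = divided_diff_coeffs([1.0, 2.0, 4.0], [1.0, 4.0, 16.0])
```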

Newton’s Divided Difference Interpolation: Error
Estimation
We use an analogy with the Taylor series approximation, whose truncation error is
$$R_n = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x_{i+1} - x_i)^{n+1} \quad (57)$$
where $\xi$ is somewhere in the interval $x_i$ to $x_{i+1}$.
For an $n$-th order interpolating polynomial, an analogous relationship is
$$R_n = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x - x_0)(x - x_1)...(x - x_n) \quad (58)$$
where $\xi$ is somewhere in the interval between the unknown and the data.
This formula requires that the underlying function be known and differentiable, which is usually not the case.
Newton’s Divided Difference Interpolation: Error
Estimation

We can approximate the error with a finite divided difference:
$$R_n = f[x, x_n, x_{n-1}, ..., x_0] (x - x_0)(x - x_1)...(x - x_n) \quad (59)$$
Again, the above equation is not solvable since it contains the unknown variable $x$. So an additional known point $x_{n+1}$ is required, giving
$$R_n \approx f[x_{n+1}, x_n, x_{n-1}, ..., x_0] (x - x_0)(x - x_1)...(x - x_n) \quad (60)$$

Lagrange’s Interpolation
The Newton interpolation polynomials could equally have been written as:
$$f_1(x) = \frac{x - x_1}{x_0 - x_1} f(x_0) + \frac{x - x_0}{x_1 - x_0} f(x_1) \quad (61)$$
$$f_2(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} f(x_0) + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} f(x_1) + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} f(x_2) \quad (62)$$
The general form is
$$f_n(x) = \sum_{i=0}^{n} L_i(x) f(x_i) \quad (63)$$
where
$$L_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j} \quad (64)$$
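A minimal sketch of Eqs. (63) and (64), with hypothetical data:

```python
# Lagrange interpolation: sum over data points of L_i(x) * f(x_i).
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:                      # note j != i in Eq. (64)
                Li *= (x - xj) / (xi - xj)
        total += Li * yi
    return total
```

With the quadratic data $y = x^2$ at $x = 1, 2, 4$, evaluating at $x = 3$ recovers $9$.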

Lagrange's Interpolation

The error definition is similar to that of Newton's interpolation:
$$R_n = f[x, x_n, x_{n-1}, ..., x_0] \prod_{i=0}^{n} (x - x_i) \quad (65)$$
For the conventional general polynomial expression of order $n$ with $n + 1$ known data points, it is also possible to solve the resulting system of equations simultaneously with the methods discussed in chapter three.
But as the order of the polynomial increases, that system of equations becomes increasingly ill-conditioned and can yield erroneous results.
Therefore, as long as interpolation is of interest, prefer the techniques discussed here (Newton's or Lagrange's interpolation polynomials).

Spline Interpolation
Spline interpolation gives a visually smooth curve by defining a separate interpolation function for each interval between two data points.
It should satisfy two conditions:
The interpolation equations of adjacent intervals should give the same value at the intersection point.
The slopes of the equations of the adjacent intervals should be equal for a smooth transition.

Figure: Spline Interpolation


Spline Interpolation
Let's take quadratic spline interpolation, as depicted in the above figure.
The equations at the two ends:
$$f(x_0) = a_1 x_0^2 + b_1 x_0 + c_1, \qquad f(x_3) = a_3 x_3^2 + b_3 x_3 + c_3 \quad (66)$$
The equations at the intersection points:
$$f(x_i) = a_i x_i^2 + b_i x_i + c_i, \qquad f(x_i) = a_{i+1} x_i^2 + b_{i+1} x_i + c_{i+1} \quad (67)$$
The derivatives at the intersection points:
$$\frac{df(x_i)}{dx} = 2 a_i x_i + b_i, \qquad \frac{df(x_i)}{dx} = 2 a_{i+1} x_i + b_{i+1} \quad (68)$$
Multi-dimensional interpolation: Bi-Linear
We interpolate along one dimension at a time, holding the others fixed:
$$f(x_i, y_1) = \frac{x_i - x_2}{x_1 - x_2} f(x_1, y_1) + \frac{x_i - x_1}{x_2 - x_1} f(x_2, y_1) \quad (69)$$
$$f(x_i, y_2) = \frac{x_i - x_2}{x_1 - x_2} f(x_1, y_2) + \frac{x_i - x_1}{x_2 - x_1} f(x_2, y_2) \quad (70)$$
With the values at $x_i$ evaluated, we use them to interpolate in $y$:
$$f(x_i, y_i) = \frac{y_i - y_2}{y_1 - y_2} f(x_i, y_1) + \frac{y_i - y_1}{y_2 - y_1} f(x_i, y_2) \quad (71)$$

Multi-dimensional interpolation: Bi-Linear

Combining the above equations:
$$f(x_i, y_i) = \frac{x_i - x_2}{x_1 - x_2} \frac{y_i - y_2}{y_1 - y_2} f(x_1, y_1) + \frac{x_i - x_1}{x_2 - x_1} \frac{y_i - y_2}{y_1 - y_2} f(x_2, y_1) + \frac{x_i - x_2}{x_1 - x_2} \frac{y_i - y_1}{y_2 - y_1} f(x_1, y_2) + \frac{x_i - x_1}{x_2 - x_1} \frac{y_i - y_1}{y_2 - y_1} f(x_2, y_2) \quad (73)$$
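Eq. (73) can be sketched directly, assuming hypothetical corner values taken from $f(x, y) = x + 2y$, which a bilinear interpolant reproduces exactly:

```python
# Bi-linear interpolation over the rectangle [x1, x2] x [y1, y2].
def bilinear(x1, x2, y1, y2, f11, f21, f12, f22, xi, yi):
    """fjk = f(xj, yk); interpolate at (xi, yi)."""
    tx1 = (xi - x2) / (x1 - x2)
    tx2 = (xi - x1) / (x2 - x1)
    ty1 = (yi - y2) / (y1 - y2)
    ty2 = (yi - y1) / (y2 - y1)
    return (tx1 * ty1 * f11 + tx2 * ty1 * f21
            + tx1 * ty2 * f12 + tx2 * ty2 * f22)

v = bilinear(0.0, 2.0, 0.0, 4.0,
             0.0, 2.0, 8.0, 10.0,   # f(0,0), f(2,0), f(0,4), f(2,4)
             1.0, 1.0)              # f(1,1) = 1 + 2*1 = 3
```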

Periodic Functions and Sinusoidals
A periodic function, which repeats itself at fixed intervals, satisfies
$$f(t) = f(t + T) \quad (74)$$
where $T$ is the period.
One very common periodic function is the sinusoid, which can be expressed with either sine or cosine:
$$f(t) = A_0 + C_1 \cos(\omega_0 t + \theta) \quad (75)$$
where $A_0$ is the mean value, $C_1$ the amplitude, $\omega_0$ the angular frequency, and $\theta$ the phase shift or phase angle.
The frequency $f$, the number of cycles per unit time, is related to the angular frequency by
$$\omega_0 = 2 \pi f, \qquad f = \frac{1}{T} \quad (76)$$
Periodic Functions and Sinusoidals
Graphical Representation, in two alternative forms

Figure: Sinusoidal (a) General expression in a single curve (b) Each term in
separate curves

Curve fitting with Sinusoidals: Least Squares

Before proceeding to the curve-fitting discussion, the general sinusoidal function defined in the previous sub-section needs to be put into a more convenient form:
$$f(t) = A_0 + C_1 [\cos(\omega_0 t) \cos\theta - \sin(\omega_0 t) \sin\theta] \quad (77)$$
If we assign $A_1 = C_1 \cos\theta$ and $B_1 = -C_1 \sin\theta$, we get
$$f(t) = A_0 + A_1 \cos(\omega_0 t) + B_1 \sin(\omega_0 t) \quad (78)$$
Fitting data with this general sinusoidal expression leaves a residual at each data point:
$$f(t) = A_0 + A_1 \cos(\omega_0 t) + B_1 \sin(\omega_0 t) + e \quad (79)$$

Curve fitting with Sinusoidals: Least Squares

Defining the sum of the squares of the residuals, as in previous sections:
$$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - [A_0 + A_1 \cos(\omega_0 t_i) + B_1 \sin(\omega_0 t_i)])^2 \quad (80)$$
Differentiating $S_r$ with respect to the constants $A_0$, $A_1$ and $B_1$:
$$\frac{\partial S_r}{\partial A_0} = -2 \sum (y_i - [A_0 + A_1 \cos(\omega_0 t_i) + B_1 \sin(\omega_0 t_i)]) = 0$$
$$\frac{\partial S_r}{\partial A_1} = -2 \sum (y_i - [A_0 + A_1 \cos(\omega_0 t_i) + B_1 \sin(\omega_0 t_i)]) \cos(\omega_0 t_i) = 0 \quad (81)$$
$$\frac{\partial S_r}{\partial B_1} = -2 \sum (y_i - [A_0 + A_1 \cos(\omega_0 t_i) + B_1 \sin(\omega_0 t_i)]) \sin(\omega_0 t_i) = 0$$

Curve fitting with Sinusoidals: Least Squares
In matrix form:
$$\begin{bmatrix} N & \sum \cos(\omega_0 t_i) & \sum \sin(\omega_0 t_i) \\ \sum \cos(\omega_0 t_i) & \sum \cos^2(\omega_0 t_i) & \sum \cos(\omega_0 t_i) \sin(\omega_0 t_i) \\ \sum \sin(\omega_0 t_i) & \sum \cos(\omega_0 t_i) \sin(\omega_0 t_i) & \sum \sin^2(\omega_0 t_i) \end{bmatrix} \begin{Bmatrix} A_0 \\ A_1 \\ B_1 \end{Bmatrix} = \begin{Bmatrix} \sum y_i \\ \sum y_i \cos(\omega_0 t_i) \\ \sum y_i \sin(\omega_0 t_i) \end{Bmatrix}$$
In the special case of $N$ data points at an equally spaced interval $\Delta t$ spanning the period $T = (N-1)\Delta t$,
$$\sum \cos(\omega_0 t_i) = \sum \sin(\omega_0 t_i) = \sum \sin(\omega_0 t_i) \cos(\omega_0 t_i) = 0, \qquad \sum \cos^2(\omega_0 t_i) = \sum \sin^2(\omega_0 t_i) = \frac{N}{2} \quad (82)$$
The matrix equation then becomes
$$\begin{bmatrix} N & 0 & 0 \\ 0 & N/2 & 0 \\ 0 & 0 & N/2 \end{bmatrix} \begin{Bmatrix} A_0 \\ A_1 \\ B_1 \end{Bmatrix} = \begin{Bmatrix} \sum y_i \\ \sum y_i \cos(\omega_0 t_i) \\ \sum y_i \sin(\omega_0 t_i) \end{Bmatrix}$$
The solution for the special case is a straightforward substitution:
$$A_0 = \frac{\sum y_i}{N}, \qquad A_1 = \frac{2}{N} \sum y_i \cos(\omega_0 t_i), \qquad B_1 = \frac{2}{N} \sum y_i \sin(\omega_0 t_i) \quad (83)$$
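A sketch of the special case of Eq. (83). Note one assumption in the sketch: the orthogonality relations hold exactly when the $N$ equally spaced samples cover one full period without repeating the endpoint ($\Delta t = T/N$), which is the sampling used below with hypothetical data:

```python
import math

T = 2.0
w0 = 2.0 * math.pi / T
N = 16
ts = [i * T / N for i in range(N)]    # N samples over exactly one period
# data from a known sinusoid: A0 = 1.5, A1 = 2.0, B1 = 0.5
ys = [1.5 + 2.0 * math.cos(w0 * t) + 0.5 * math.sin(w0 * t) for t in ts]

A0 = sum(ys) / N                                            # Eq. (83)
A1 = 2.0 / N * sum(y * math.cos(w0 * t) for t, y in zip(ts, ys))
B1 = 2.0 / N * sum(y * math.sin(w0 * t) for t, y in zip(ts, ys))
```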
Continuous Fourier Series
The Fourier series plays a role for arbitrary periodic functions analogous to that of the Taylor series for smooth functions: it represents them by an infinite series of sinusoids.
$$f(t) = a_0 + a_1 \cos(\omega_0 t) + b_1 \sin(\omega_0 t) + a_2 \cos(2\omega_0 t) + b_2 \sin(2\omega_0 t) + ... \quad (84)$$
$$f(t) = a_0 + \sum_{k=1}^{\infty} [a_k \cos(k \omega_0 t) + b_k \sin(k \omega_0 t)] \quad (85)$$
where $\omega_0 = 2\pi/T$ is the fundamental frequency, and its integer multiples $2\omega_0, 3\omega_0, ...$ are called harmonics.
The coefficients are computed as
$$a_k = \frac{2}{T} \int_0^T f(t) \cos(k \omega_0 t)\, dt, \qquad b_k = \frac{2}{T} \int_0^T f(t) \sin(k \omega_0 t)\, dt, \qquad a_0 = \frac{1}{T} \int_0^T f(t)\, dt \quad (86)$$
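Eq. (86) can be checked numerically; the sketch below uses a midpoint-rule quadrature (an assumption of this sketch, not part of the slides) on a square wave whose known coefficients are $b_k = 4/(k\pi)$ for odd $k$ and $a_k = 0$ for all $k$:

```python
import math

T = 2.0 * math.pi
w0 = 2.0 * math.pi / T                # fundamental frequency

def f(t):
    # square wave: +1 on (0, T/2), -1 on (T/2, T)
    return 1.0 if (t % T) < T / 2 else -1.0

def coeff(k, trig, n=20000):
    # midpoint-rule approximation of (2/T) * integral of f(t)*trig(k*w0*t)
    dt = T / n
    return 2.0 / T * sum(f((i + 0.5) * dt) * trig(k * w0 * (i + 0.5) * dt) * dt
                         for i in range(n))

b1 = coeff(1, math.sin)               # should be close to 4/pi
a1 = coeff(1, math.cos)               # should be close to 0
```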
Continuous Fourier Series
The continuous Fourier series can also be expressed with exponential functions:
$$f(t) = \sum_{k=-\infty}^{\infty} \tilde{c}_k e^{i k \omega_0 t} \quad (87)$$
where
$$i = \sqrt{-1}, \qquad \tilde{c}_k = \frac{1}{T} \int_{-T/2}^{T/2} f(t) e^{-i k \omega_0 t}\, dt \quad (88)$$
The Fourier transform pair is
$$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(i\omega_0) e^{i \omega_0 t}\, d\omega_0, \qquad F(i\omega_0) = \int_{-\infty}^{\infty} f(t) e^{-i \omega_0 t}\, dt \quad (89)$$

