
HCM CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION


Faculty of High Quality Learning

Assoc. Prof. Dr. Pham Huy Tuan

APRIL 11, 2020

1. Linear regression
 Criteria for a “Best” Fit
 Least-Squares Fit of a Straight Line
 Quantification of Error of Linear Regression
 Linearization of Nonlinear Relationships
2. Polynomial regression
 Gauss elimination method

3. Multiple linear regression


4. Nonlinear regression

Where substantial error is associated with data, polynomial interpolation is inappropriate.

A more appropriate strategy for such cases is to derive an approximating function that fits the shape or general trend of the data without necessarily matching the individual points.
The approach is to derive a curve that minimizes the discrepancy between the data points and the curve; this is called least-squares regression.

Fig. 2: Equipment setup

Fit a straight line to a set of paired observations: (x1, y1), (x2, y2), . . . , (xn, yn).
The mathematical expression for the straight line is y = a0 + a1x + e.

e, the error (residual), is the discrepancy between the true value of y and the approximate value, a0 + a1x, predicted by the linear equation.


One possible criterion for a “best” fit: minimize the sum of the residual errors for all the available data. This criterion is inadequate, since positive and negative residuals can cancel.

The least-squares criterion: minimize the sum of the squares of the residuals between the measured y and the y calculated with the linear model.
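For reference, a standard statement of the least-squares criterion and the resulting formulas for the intercept and slope (these equations are not reproduced on the slide; the forms below follow the usual textbook derivation):

```latex
% Least-squares criterion (sum of squared residuals):
S_r = \sum_{i=1}^{n} e_i^{2} = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)^{2}

% Setting dS_r/da_0 = 0 and dS_r/da_1 = 0 gives the normal equations, whose solution is
a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^{2} - \left( \sum x_i \right)^{2}},
\qquad
a_0 = \bar{y} - a_1 \bar{x}
```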


Problem Statement: Fit a straight line to the x and y values in the first two columns of Table 17.1.
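A minimal numerical sketch of this fit, assuming the seven (x, y) pairs commonly used for Table 17.1 in Chapra & Canale (the table itself is not reproduced on the slide, so these values are an assumption):

```python
# Sketch only: the data below are assumed Table 17.1 values, not taken from the slide.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])  # assumed values

n = len(x)
a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
a0 = np.mean(y) - a1 * np.mean(x)
print(f"y = {a0:.4f} + {a1:.4f} x")   # roughly y = 0.0714 + 0.8393 x for these values
```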


Quantification of error of linear regression:
Standard deviation of the data
“Standard deviation” for the regression line (the standard error of the estimate)
Correlation coefficient
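The quantities named above are standard; assuming the usual textbook definitions, they can be written as:

```latex
% Standard deviation of the data about its mean:
s_y = \sqrt{\frac{S_t}{n-1}}, \qquad S_t = \sum_{i=1}^{n} (y_i - \bar{y})^2

% "Standard deviation" for the regression line (standard error of the estimate):
s_{y/x} = \sqrt{\frac{S_r}{n-2}}, \qquad S_r = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2

% Coefficient of determination and correlation coefficient:
r^2 = \frac{S_t - S_r}{S_t}, \qquad r = \sqrt{r^2}
```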


Algorithm for linear regression
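A sketch of such an algorithm in Python (the slide's own pseudocode is not reproduced, so this simply implements the quantities defined above; the function name is illustrative):

```python
import math

def linear_regression(x, y):
    """Fit y = a0 + a1*x by least squares and report error statistics.

    Sketch implementation: returns the intercept a0, slope a1,
    standard error of the estimate s_yx, and coefficient of determination r2.
    """
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi * xi for xi in x)

    a1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
    a0 = sum_y / n - a1 * sum_x / n

    st = sum((yi - sum_y / n) ** 2 for yi in y)                   # spread about the mean
    sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))    # spread about the line

    s_yx = math.sqrt(sr / (n - 2))
    r2 = (st - sr) / st
    return a0, a1, s_yx, r2
```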


A theoretical mathematical model for the velocity of the parachutist:

(b)

Where
 v = velocity (m/s),
 g = gravitational constant (9.8 m/s²),
 m = mass equal to 68.1 kg,
 c = drag coefficient of 12.5 kg/s.

 An alternative empirical model

(c)

Suppose that you would like to test and compare the adequacy of these two mathematical models.
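As a sketch of how the comparison could be set up, assuming the usual analytical solution v(t) = (gm/c)(1 − e^(−(c/m)t)) for the theoretical model (equations (b) and (c) themselves are not reproduced on the slide, so this form is an assumption):

```python
import math

g, m, c = 9.8, 68.1, 12.5   # values given on the slide

def v_theoretical(t):
    """Assumed analytical model: v(t) = (g*m/c) * (1 - exp(-(c/m)*t))."""
    return g * m / c * (1.0 - math.exp(-(c / m) * t))

# Tabulate predictions at a few times; measured velocities (not shown on the slide)
# would be compared against these values, e.g. by regression of predicted
# versus measured velocity.
for t in range(0, 16, 5):
    print(f"t = {t:2d} s   v = {v_theoretical(t):6.2f} m/s")
```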


TRANSFORMATIONS can be used to express the data in a form that is compatible with linear regression; the standard linearized forms are written out after this list.
 Exponential model

 The simple power equation

 The saturation-growth-rate equation
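Assuming the usual forms of these three models, the linearizing transformations are:

```latex
% Exponential model:  y = a_1 e^{b_1 x}
\ln y = \ln a_1 + b_1 x

% Simple power equation:  y = a_2 x^{b_2}
\log y = \log a_2 + b_2 \log x

% Saturation-growth-rate equation:  y = a_3 \frac{x}{b_3 + x}
\frac{1}{y} = \frac{1}{a_3} + \frac{b_3}{a_3}\,\frac{1}{x}
```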


Problem Statement. Fit Eq. (17.13) to the data in Table 17.3 using a logarithmic transformation of the data.

x y log(x) log(y)
1 0.5 0.000 -0.301
2 1.7 0.301 0.230
3 3.4 0.477 0.531
4 5.7 0.602 0.756
5 8.4 0.699 0.924

A linear regression of the log-transformed data yields the result:


log y = b log x + log a (power function)

The fitted line in log-log space is log y = 1.7517 log x − 0.3002, so b = 1.7517 and a = 10^(−0.3002) ≈ 0.5009, giving the power-law fit y = 0.5009 x^1.7517.

[Figure: plot of log y versus log x with the fitted straight line, and plot of y versus x with the fitted power curve.]
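A short numerical check of this result using the tabulated values above (numpy.polyfit is used here only as a convenience; any least-squares routine would do):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.5, 1.7, 3.4, 5.7, 8.4])   # Table 17.3 values from the slide

b, log_a = np.polyfit(np.log10(x), np.log10(y), 1)  # straight-line fit in log-log space
a = 10 ** log_a
print(f"log y = {b:.4f} log x + {log_a:.4f}")   # about 1.7517 and -0.3002
print(f"y = {a:.4f} x^{b:.4f}")                 # about y = 0.5009 x^1.7517
```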

Fig. 2: Equipment setup

Run 1
Time (s)   Position (m)   Velocity (m/s)   Acceleration (m/s²)
0 0.17
0.05 0.17 0.03
0.1 0.17 0.07 0.73
0.15 0.17 0.11 0.72
0.2 0.18 0.14 0.73
0.25 0.19 0.18 0.76
0.3 0.2 0.22 0.76
0.35 0.21 0.26 0.71
0.4 0.22 0.29 0.66
0.45 0.24 0.32 0.67
0.5 0.26 0.35 0.75
0.55 0.27 0.4 0.8
0.6 0.29 0.44 0.73
0.65 0.32 0.47 0.55
0.7 0.34 0.49 0.4

The least-squares procedure can be readily extended to fit the data to a higher-order polynomial.

The sum of the squares of the residuals is minimized.

Take the derivative with respect to each of the unknown coefficients of the polynomial and set it to zero; this yields the normal equations (written out below for the second-order case).
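A standard statement of this derivation for y = a0 + a1 x + a2 x² (not reproduced on the slide):

```latex
% Sum of squared residuals for y = a_0 + a_1 x + a_2 x^2:
S_r = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i - a_2 x_i^{2} \right)^{2}

% Setting the partial derivatives with respect to a_0, a_1, a_2 to zero
% gives three linear normal equations:
\begin{aligned}
n\,a_0 + \left(\textstyle\sum x_i\right) a_1 + \left(\textstyle\sum x_i^2\right) a_2 &= \textstyle\sum y_i \\
\left(\textstyle\sum x_i\right) a_0 + \left(\textstyle\sum x_i^2\right) a_1 + \left(\textstyle\sum x_i^3\right) a_2 &= \textstyle\sum x_i y_i \\
\left(\textstyle\sum x_i^2\right) a_0 + \left(\textstyle\sum x_i^3\right) a_1 + \left(\textstyle\sum x_i^4\right) a_2 &= \textstyle\sum x_i^2 y_i
\end{aligned}
```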


Problem Statement. Fit a second-order polynomial to the data in the first two columns of Table 17.4.
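A numerical sketch, assuming the six data points commonly used for Table 17.4 in Chapra & Canale (x = 0..5; the table itself is not reproduced on the slide, so these values are an assumption):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])   # assumed Table 17.4 values

# Build and solve the 3x3 normal equations for y = a0 + a1*x + a2*x^2.
A = np.array([[len(x),       x.sum(),      (x**2).sum()],
              [x.sum(),      (x**2).sum(), (x**3).sum()],
              [(x**2).sum(), (x**3).sum(), (x**4).sum()]])
b = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])
a0, a1, a2 = np.linalg.solve(A, b)
print(f"y = {a0:.4f} + {a1:.4f} x + {a2:.4f} x^2")   # about 2.4786 + 2.3593 x + 1.8607 x^2
```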

 The approach is designed to solve a general set of n equations:

The two phases of Gauss elimination: forward elimination and back substitution.
The primes indicate the number of times that the coefficients and constants have been modified.
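A compact sketch of naive Gauss elimination in Python (forward elimination followed by back substitution; pivoting is omitted for clarity, matching the basic description above):

```python
def gauss_eliminate(A, b):
    """Solve A x = b by naive Gauss elimination (no pivoting).

    A is an n x n list of lists, b a list of length n; both are modified in place.
    """
    n = len(b)

    # Forward elimination: zero out the coefficients below the diagonal.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]          # e.g. a21/a11 on the first pass
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]

    # Back substitution: solve for x_n, then x_{n-1}, ..., x_1.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Example usage with a small 3x3 system:
A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
print(gauss_eliminate(A, b))   # approximately [3.0, -2.5, 7.0]
```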


The initial step will be to eliminate the first unknown, x1, from the second through the nth equations. To do this, multiply Eq. (9.12a) by a21/a11 and subtract the result from the second equation.
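In the usual notation, the modified (primed) coefficients of the second equation after this step are:

```latex
% Multiply the first equation by a_{21}/a_{11} and subtract it from the second:
a'_{2j} = a_{2j} - \frac{a_{21}}{a_{11}}\, a_{1j} \quad (j = 2, \dots, n),
\qquad
b'_{2} = b_{2} - \frac{a_{21}}{a_{11}}\, b_{1},
\qquad
a'_{21} = 0
```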

A useful extension of linear regression is the case where y is a linear function of two or more independent variables.

For this two-dimensional case, the regression “line” becomes a “plane”.
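A minimal sketch of such a two-variable fit, y = a0 + a1 x1 + a2 x2, with illustrative data (the values below are for demonstration only and are not from the slides):

```python
import numpy as np

# Illustrative observations of y at pairs (x1, x2); not slide data.
x1 = np.array([0.0, 2.0, 2.5, 1.0, 4.0, 7.0])
x2 = np.array([0.0, 1.0, 2.0, 3.0, 6.0, 2.0])
y  = np.array([5.0, 10.0, 9.0, 0.0, 3.0, 27.0])

# Least-squares solution of y = a0 + a1*x1 + a2*x2 (the regression "plane").
X = np.column_stack([np.ones_like(x1), x1, x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
a0, a1, a2 = coeffs
print(f"y = {a0:.3f} + {a1:.3f} x1 + {a2:.3f} x2")
```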


Digital Light Processing (DLP)


Velocity versus Time Chart

Time (s)  Run 1     Run 2      Run 3   Run 4
0         0         0          0       0
0.05      0.03      -1.48E-04  -0.01   0
0.1       0.07      0.02       0       0
0.15      0.11      0.04       0.01    0
0.2       0.14      0.06       0.02    0.01
0.25      0.18      0.07       0.03    0.02
0.3       0.22      0.09       0.04    0.03
0.35      0.26      0.11       0.06    0.03
0.4       0.29      0.13       0.07    0.05
0.45      0.32      0.15       0.08    0.06
0.5       0.35      0.18       0.1     0.07
0.55      0.4       0.2        0.11    0.08
0.6       0.44      0.21       0.12    0.09
0.65      0.47      0.22       0.13    0.1
0.7       0.49      0.24       0.15    0.11
0.75      0.5       0.26       0.16    0.12
0.8       0.52      0.28       0.18    0.13
0.85      0.56      0.3        0.19    0.15
0.9       0.61      0.33       0.2     0.15
0.95      0.67      0.35       0.21    0.16
1         0.71      0.36       0.22    0.16
1.05      0.75      0.37       0.23    0.17
1.1       0.78      0.37       0.25    0.18
1.15      0.81      0.37       0.26    0.19
1.2       0.85      0.39       0.28    0.2
1.25      0.88      0.42       0.3     0.21
1.3       0.89      0.46       0.31    0.22

[Figure: velocity (m/s) versus time (s) for the four runs, each with a fitted straight line; the trendline annotations read y = 0.6839x + 0.0087, y = 0.3595x - 0.0103, y = 0.2434x - 0.0227, and y = 0.1924x - 0.0259 (top to bottom).]
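A short sketch of applying the straight-line fit to the tabulated runs (numpy.polyfit used for convenience; the chart's own trendlines were presumably computed over the full recorded series, which extends past the 1.3 s shown in the table, so the slopes here only approximate the annotated ones):

```python
import numpy as np

# Times 0, 0.05, ..., 1.3 s and the Run 1 / Run 2 velocities from the table above.
t = np.arange(0.0, 1.301, 0.05)
run1 = [0, 0.03, 0.07, 0.11, 0.14, 0.18, 0.22, 0.26, 0.29, 0.32, 0.35, 0.4,
        0.44, 0.47, 0.49, 0.5, 0.52, 0.56, 0.61, 0.67, 0.71, 0.75, 0.78,
        0.81, 0.85, 0.88, 0.89]
run2 = [0, -1.48e-4, 0.02, 0.04, 0.06, 0.07, 0.09, 0.11, 0.13, 0.15, 0.18,
        0.2, 0.21, 0.22, 0.24, 0.26, 0.28, 0.3, 0.33, 0.35, 0.36, 0.37,
        0.37, 0.37, 0.39, 0.42, 0.46]

for name, v in (("Run 1", run1), ("Run 2", run2)):
    slope, intercept = np.polyfit(t, v, 1)   # straight-line fit v = slope*t + intercept
    print(f"{name}: v = {slope:.4f} t {intercept:+.4f}")
# Run 1 comes out near 0.68-0.69 m/s^2, close to the chart annotation of 0.6839.
```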
