1.1 Treatment and Interpretation of Engineering Data


1. Graphical representation

• Purpose
   - to show what has happened
   - to show a relationship between quantities
   - to show a distribution

• General types of graphs
   (a) Time series; (b) Scatter plot; (c) Histogram
   [Figure: example time-series, scatter, and histogram plots; axes labeled A (unit) and B (unit), histogram categories A-H]

• What about the axes?
   - Horizontal axis (x-axis) => independent variable, explanatory variable
   - Vertical axis (y-axis) => dependent variable, response variable

• What's interesting?
   - Slope of tangent: rate of change of y with respect to x (see the sketch after the figure below)
   - Area under curve: accumulated quantity (the integral of y with respect to x)
   [Figure: slope of the line through (x1, y1) and (x2, y2)]
   $\Delta y = y_2 - y_1$,   $\Delta x = x_2 - x_1$,   slope $= \Delta y / \Delta x$
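As a small numerical illustration (not part of the original notes; the sample values are made up), the sketch below estimates a slope from two points and an area under a sampled curve with the trapezoidal rule:

```python
# Minimal sketch: slope between two sample points and area under a sampled
# curve by the trapezoidal rule (illustrative data: y = x^2 sampled at x = 0..4).
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.0, 1.0, 4.0, 9.0, 16.0]

slope = (y[2] - y[1]) / (x[2] - x[1])        # dy/dx between (1, 1) and (2, 4) -> 3.0

# Area under the curve: sum of trapezoids between consecutive samples
area = sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2 for i in range(len(x) - 1))
print(slope, area)                           # 3.0 and 22.0 (exact integral is 64/3 = 21.33)
```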
General shapes of curve-fit families
   [Figure: typical shapes of curve-fit families, e.g. the logarithmic curve]
   Reference: http://www.ncsu.edu/labwrite/res/gt/gt-menu.html
2. Empirical equations
Example
   x     y
   0.2   3.2
   0.4   3.7
   1.0   4.1
   2.0   8.1
   3.0   13.7
   4.0   22.6

   [Figure: scatter plot of y vs x for the data above, 0 ≤ x ≤ 5, 0 ≤ y ≤ 25]

Which empirical form fits the data?
   $y = a + bx$ ?
   $y = ax^n$ ?
   $y = c + ax^n$ ???
Since y approaches about 3.2 as x becomes small, take c ≈ 3.2 and fit
   $y - 3.2 = ax^n$
2.1 Evaluation of constants

• Direct reading from curve => very simple
From the curve of (y - 3.2) against x, read two points:
   at x = 0.6  =>  y - 3.2 = 0.4
   at x = 4.5  =>  y - 3.2 = 25
Then
   $0.4 = a(0.6)^n$     (1)
   $25 = a(4.5)^n$      (2)
Solving: a = 1.15, n = 2.05
   or  $y = 3.2 + 1.15x^{2.05}$

Check
   x     y      ycal   R = ycal - y   R²
   0.2   3.2    3.2     0.0           0
   0.4   3.7    3.5    -0.2           0.04
   1.0   4.1    4.4     0.3           0.09
   2.0   8.1    8.0    -0.1           0.01
   3.0   13.7   14.1    0.4           0.16
   4.0   22.6   22.9    0.3           0.09
   ΣR = 0.7,  ΣR² = 0.39   =>  not good: the residuals do not average to zero
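To reproduce the two-point solution: dividing (2) by (1) eliminates a, giving n = ln(25/0.4) / ln(4.5/0.6), after which a follows from either equation. A minimal Python sketch (the two readings are the ones taken from the curve above):

```python
import math

# Two points read off the (y - 3.2) vs x curve
x1, r1 = 0.6, 0.4
x2, r2 = 4.5, 25.0

# r = a * x**n  =>  r2/r1 = (x2/x1)**n
n = math.log(r2 / r1) / math.log(x2 / x1)
a = r1 / x1**n
print(a, n)        # roughly 1.14 and 2.05, close to the slide values a = 1.15, n = 2.05
```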
• Method of averages:  $R = y_{cal} - y$  with  $\sum R = 0$ in each group

Example
Step 1. Classify the data into groups: no. of groups = no. of constants (here 3 groups of 2 points)
Step 2. Define the equation:  $y = c + ax^n$
Step 3. Write the residual for each point:  $R = y_{cal} - y = (c + ax^n) - y$
   $R_1 = c + a(0.2)^n - 3.2$     (1.1)
   $R_2 = c + a(0.4)^n - 3.7$     (1.2)
   $R_3 = c + a(1.0)^n - 4.1$     (2.1)
   $R_4 = c + a(2.0)^n - 8.1$     (2.2)
   $R_5 = c + a(3.0)^n - 13.7$    (3.1)
   $R_6 = c + a(4.0)^n - 22.6$    (3.2)
Step 4. For each group set $\sum R = 0$; then
   $2c + a(0.2^n + 0.4^n) - 6.9 = 0$     (a)
   $2c + a(1^n + 2^n) - 12.2 = 0$        (b)
   $2c + a(3^n + 4^n) - 36.3 = 0$        (c)
Solving: a = 1.07, c = 3.3, n = 2.08,  so  $y = 3.3 + 1.07x^{2.08}$

Check
   x     y      ycal    R = ycal - y   R²
   0.2   3.2    3.34     0.14          0.018942
   0.4   3.7    3.46    -0.24          0.058033
   1.0   4.1    4.37     0.27          0.0729
   2.0   8.1    7.82    -0.28          0.076155
   3.0   13.7   13.81    0.11          0.013151
   4.0   22.6   22.43   -0.17          0.029599
   ΣR = -0.2,  ΣR² = 0.27
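The three group equations (a)-(c) are nonlinear in n, so in practice they can be handed to a numerical root finder. A minimal sketch, assuming SciPy is available:

```python
from scipy.optimize import fsolve

def group_sums(p):
    """Group residual sums (a), (b), (c) for the pairs (0.2, 0.4), (1, 2), (3, 4)."""
    a, c, n = p
    return [2*c + a*(0.2**n + 0.4**n) - 6.9,
            2*c + a*(1.0**n + 2.0**n) - 12.2,
            2*c + a*(3.0**n + 4.0**n) - 36.3]

a, c, n = fsolve(group_sums, x0=[1.0, 3.0, 2.0])
print(a, c, n)      # values close to the slide result a = 1.07, c = 3.3, n = 2.08
```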
• Method of least squares: linear regression

Mean ($\bar{y}$):                  $\bar{y} = \dfrac{\sum y_i}{n}$ ;  i = 1, …, n

Standard deviation ($S_y$):        $S_y = \sqrt{\dfrac{\sum (y_i - \bar{y})^2}{n-1}} = \sqrt{\dfrac{S_t}{n-1}}$

Variance ($S_y^2$):                $S_y^2 = \dfrac{\sum (y_i - \bar{y})^2}{n-1} = \dfrac{S_t}{n-1}$

Coefficient of variation (c.v.):   $c.v. = \dfrac{S_y}{\bar{y}} \times 100\%$
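These summary statistics can be computed directly with the standard-library statistics module; a minimal sketch using the example data set from above:

```python
import statistics

y = [3.2, 3.7, 4.1, 8.1, 13.7, 22.6]      # example data set from above

y_bar = statistics.mean(y)                 # mean
s_y   = statistics.stdev(y)                # sample standard deviation (divisor n - 1)
var_y = statistics.variance(y)             # sample variance S_y^2 = S_t / (n - 1)
cv    = s_y / y_bar * 100.0                # coefficient of variation, %
print(y_bar, s_y, var_y, cv)
```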

Linear regression

Empirical equation:  $y_{cal} = a + bx$

Residual or error:  $R = y_{cal} - y = (a + bx) - y$

Sum of the squares of the residuals:  $S_r = \sum_{i=1}^{n} R_i^2 = \sum_{i=1}^{n} (a + bx_i - y_i)^2$

The constants a and b are found from  $\dfrac{\partial S_r}{\partial a} = \sum 2(a + bx_i - y_i)$  and  $\dfrac{\partial S_r}{\partial b} = \sum 2(a + bx_i - y_i)x_i$

Setting $\dfrac{\partial S_r}{\partial a} = 0$ and $\dfrac{\partial S_r}{\partial b} = 0$; then

$\sum 2(a + bx_i - y_i) = 0 \;\Rightarrow\; \sum a + b\sum x_i - \sum y_i = 0 \;\Rightarrow\; na + b\sum x_i = \sum y_i$     (*)

$\sum 2(a + bx_i - y_i)x_i = 0 \;\Rightarrow\; a\sum x_i + b\sum x_i^2 = \sum x_i y_i$     (**)

Solving:  $b = \dfrac{n\sum x_i y_i - \sum x_i \sum y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2}$   and   $a = \bar{y} - b\bar{x}$
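A minimal plain-Python implementation of these closed-form expressions (a sketch, not from the notes):

```python
def linear_regression(x, y):
    """Least-squares fit of y = a + b*x using the closed-form normal-equation solution."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = sy / n - b * sx / n        # a = y_bar - b * x_bar
    return a, b
```

Calling linear_regression([1, 2, 3, 4, 5, 6, 7], [0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5]) reproduces the worked example below (a ≈ 0.0714, b ≈ 0.8393); numpy.polyfit(x, y, 1) gives the same coefficients.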
Example
Fit a straight line to the x and y values of the following data:

   xi    yi     xi·yi   xi²
   1     0.5     0.5     1.0
   2     2.5     5.0     4.0
   3     2.0     6.0     9.0
   4     4.0    16.0    16.0
   5     3.5    17.5    25.0
   6     6.0    36.0    36.0
   7     5.5    38.5    49.0

From the data:  $n = 7$,  $\sum x_i y_i = 119.5$,  $\sum x_i^2 = 140$,  $\sum x_i = 28$,  $\sum y_i = 24.0$

$\bar{x} = \dfrac{28}{7} = 4$,   $\bar{y} = \dfrac{24}{7} = 3.42857$

From  $b = \dfrac{n\sum x_i y_i - \sum x_i \sum y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2} = \dfrac{7(119.5) - 28(24.0)}{7(140.0) - 28^2} = 0.8393$

$a = \bar{y} - b\bar{x} = 3.42857 - 0.8393(4) = 0.0714$

$\Rightarrow\; y = 0.0714 + 0.8393x$
Error in linear regression

Sum of squares of the differences between the data and their mean:

$S_t = \sum_{i=1}^{n} (y_i - \bar{y})^2$,   and the "standard deviation"  $S_y = \sqrt{\dfrac{S_t}{n-1}}$

Standard error of estimate (spread of the data about the regression model):

$S_{y/x} = \sqrt{\dfrac{S_r}{n-2}}$,   where  $S_r = \sum_{i=1}^{n} R_i^2 = \sum_{i=1}^{n} (y_{cal,i} - y_i)^2 = \sum_{i=1}^{n} (a + bx_i - y_i)^2$

Coefficient of determination:  $R^2 = \dfrac{S_t - S_r}{S_t}$     If $S_r = 0$, then $R = R^2 = 1$ (the line explains all of the variation).

Correlation coefficient:  $R = \sqrt{R^2}$     If $S_r = S_t$, then $R = R^2 = 0$ (the line explains none of the variation beyond the mean).
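A minimal sketch computing $S_t$, $S_r$, the standard error of estimate, and $R^2$, using the data and the fitted coefficients from the worked example above:

```python
import math

x = [1, 2, 3, 4, 5, 6, 7]
y = [0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5]
a, b = 0.0714, 0.8393                      # coefficients from the worked example

n = len(y)
y_bar = sum(y) / n
St = sum((yi - y_bar) ** 2 for yi in y)                       # spread about the mean
Sr = sum((a + b * xi - yi) ** 2 for xi, yi in zip(x, y))      # spread about the fit
Syx = math.sqrt(Sr / (n - 2))                                 # standard error of estimate
R2 = (St - Sr) / St                                           # coefficient of determination
print(St, Sr, Syx, R2)
```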


Distribution of the standard error around the regression equation => normal curve
   [Figure: (left) distribution of data around the mean of the dependent variable;
    (right) distribution of data around the best-fit line]

Example of linear regression with residual errors
   [Figure: (left) linear regression with small residual errors;
    (right) linear regression with large residual errors]
Application of linear regression

• Power function
Empirical equation:  $y = Ax^B$
Curve fitting (take logarithms):  $\ln y = \ln A + B\ln x$
   [Plot $\ln y$ against $\ln x$:  slope = B, intercept = $\ln A$]

Linear regression on $(\ln x_i,\ \ln y_i)$:

$b = \dfrac{n\sum_{i=1}^{n} \ln x_i \ln y_i - \left(\sum_{i=1}^{n} \ln x_i\right)\left(\sum_{i=1}^{n} \ln y_i\right)}{n\sum_{i=1}^{n} (\ln x_i)^2 - \left(\sum_{i=1}^{n} \ln x_i\right)^2}$

$a = \dfrac{\sum_{i=1}^{n} \ln y_i - b\sum_{i=1}^{n} \ln x_i}{n} = \overline{\ln y} - b\,\overline{\ln x}$

where  $B = b$  and  $A = \exp(a)$
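A sketch of the power-function fit, assuming NumPy is available: transform to (ln x, ln y), run the ordinary straight-line regression, and convert back (the data here are made up for illustration):

```python
import numpy as np

def fit_power(x, y):
    """Fit y = A * x**B by linear regression on ln(x) vs ln(y)."""
    b, a = np.polyfit(np.log(x), np.log(y), 1)   # slope and intercept of the linearized data
    return np.exp(a), b                          # A = exp(a), B = b

x = np.array([1.0, 2.0, 4.0, 8.0])
y = 3.0 * x**1.5                                 # illustrative data that follow y = 3 x^1.5
print(fit_power(x, y))                           # recovers approximately (3.0, 1.5)
```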
• Exponential function
Empirical equation:  $y = Ae^{Bx} = A\exp(Bx)$
Curve fitting (take logarithms):  $\ln y = \ln A + Bx$
   [Plot $\ln y$ against $x$:  slope = B, intercept = $\ln A$]

Linear regression on $(x_i,\ \ln y_i)$:

$b = \dfrac{n\sum_{i=1}^{n} x_i \ln y_i - \left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} \ln y_i\right)}{n\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2}$

$a = \dfrac{\sum_{i=1}^{n} \ln y_i - b\sum_{i=1}^{n} x_i}{n} = \overline{\ln y} - b\bar{x}$

where  $B = b$  and  $A = \exp(a)$
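The same idea, sketched for the exponential form (ln y regressed against x; NumPy assumed):

```python
import numpy as np

def fit_exponential(x, y):
    """Fit y = A * exp(B*x) by linear regression on x vs ln(y)."""
    b, a = np.polyfit(np.asarray(x, float), np.log(y), 1)
    return np.exp(a), b                          # A = exp(a), B = b
```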


• Logarithmic function
Empirical equation:  $y = a + b\ln x$
Curve fitting:  the equation is already linear in $\ln x$
   [Plot $y$ against $\ln x$:  slope = b, intercept = a]

Linear regression on $(\ln x_i,\ y_i)$:

$b = \dfrac{n\sum_{i=1}^{n} y_i \ln x_i - \left(\sum_{i=1}^{n} \ln x_i\right)\left(\sum_{i=1}^{n} y_i\right)}{n\sum_{i=1}^{n} (\ln x_i)^2 - \left(\sum_{i=1}^{n} \ln x_i\right)^2}$

$a = \dfrac{\sum_{i=1}^{n} y_i - b\sum_{i=1}^{n} \ln x_i}{n} = \bar{y} - b\,\overline{\ln x}$
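And for the logarithmic form (y regressed against ln x; NumPy assumed):

```python
import numpy as np

def fit_logarithmic(x, y):
    """Fit y = a + b*ln(x) by linear regression on ln(x) vs y."""
    b, a = np.polyfit(np.log(x), np.asarray(y, float), 1)
    return a, b
```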
• Saturation growth-rate function
Empirical equation:  $y = A\dfrac{x}{B + x}$
Curve fitting (take reciprocals):  $\dfrac{1}{y} = \dfrac{1}{A} + \dfrac{B}{A}\cdot\dfrac{1}{x}$
   [Plot $1/y$ against $1/x$:  slope = B/A, intercept = 1/A]

Linear regression on $(1/x_i,\ 1/y_i)$:

$b = \dfrac{n\sum_{i=1}^{n} \dfrac{1}{x_i}\dfrac{1}{y_i} - \left(\sum_{i=1}^{n} \dfrac{1}{x_i}\right)\left(\sum_{i=1}^{n} \dfrac{1}{y_i}\right)}{n\sum_{i=1}^{n} \left(\dfrac{1}{x_i}\right)^2 - \left(\sum_{i=1}^{n} \dfrac{1}{x_i}\right)^2}$

$a = \dfrac{\sum_{i=1}^{n} \dfrac{1}{y_i} - b\sum_{i=1}^{n} \dfrac{1}{x_i}}{n} = \overline{(1/y)} - b\,\overline{(1/x)}$

where  $A = 1/a$  and  $B = bA$
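A matching sketch for the saturation growth-rate form, with the conversion back to A and B (NumPy assumed):

```python
import numpy as np

def fit_saturation(x, y):
    """Fit y = A*x / (B + x) by linear regression on 1/x vs 1/y."""
    b, a = np.polyfit(1.0 / np.asarray(x, float), 1.0 / np.asarray(y, float), 1)
    A = 1.0 / a
    return A, b * A                              # A = 1/a, B = b*A
```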


Example
A chemical engineer is studying the rate at which a reactant is consumed in a chemical reaction and has collected the following data. Determine the relationship between concentration and reaction rate using the linear regression concept.

Concentration (mole/ft3):  100    80    60    40    20    10     5      1
Reaction rate (mole/s):    2.85   2.00  1.25  0.67  0.22  0.072  0.024  0.0018

Solution
   [Figure: (left) reaction rate (mole/s) vs concentration (mole/ft3); (right) ln(y) vs ln(x)]

The data follow the power form:  $y = Ax^B$,  so  $\ln y = \ln A + B\ln x$

$b = \dfrac{n\sum \ln x_i \ln y_i - \left(\sum \ln x_i\right)\left(\sum \ln y_i\right)}{n\sum (\ln x_i)^2 - \left(\sum \ln x_i\right)^2}$,   $a = \dfrac{\sum \ln y_i - b\sum \ln x_i}{n} = \overline{\ln y} - b\,\overline{\ln x}$

where  $B = b$  and  $A = \exp(a)$
   xi     yi       ln(xi)   ln(yi)   ln(xi)·ln(yi)   (ln(xi))²
   100    2.85     4.605     1.047     4.823          21.208
   80     2.00     4.382     0.693     3.037          19.202
   60     1.25     4.094     0.223     0.914          16.763
   40     0.67     3.689    -0.400    -1.477          13.607
   20     0.22     2.996    -1.514    -4.536           8.974
   10     0.072    2.303    -2.631    -6.058           5.302
   5      0.024    1.609    -3.730    -6.003           2.590
   1      0.0018   0.000    -6.320     0.000           0.000
   Σ      316      7.09     23.68    -12.63    -9.300  87.648

From the data:  $n = 8$,  $\sum \ln x_i = 23.68$,  $\sum \ln y_i = -12.63$,  $\sum \ln x_i \ln y_i = -9.300$,  $\sum (\ln x_i)^2 = 87.648$

$\left(\sum \ln x_i\right)\left(\sum \ln y_i\right) = (23.68)(-12.63) = -299.08$,   $\left(\sum \ln x_i\right)^2 = (23.68)(23.68) = 560.74$

$\overline{\ln x} = \dfrac{23.68}{8} = 2.96$,   $\overline{\ln y} = \dfrac{-12.63}{8} = -1.58$

From  $b = \dfrac{n\sum \ln x_i \ln y_i - \left(\sum \ln x_i\right)\left(\sum \ln y_i\right)}{n\sum (\ln x_i)^2 - \left(\sum \ln x_i\right)^2} = \dfrac{8(-9.30) - (23.68)(-12.63)}{8(87.648) - 560.74} = 1.5988$

$a = \overline{\ln y} - b\,\overline{\ln x} = -1.58 - 1.5988(2.96) = -6.312$

$A = \exp(a) = \exp(-6.312) = 0.00182$,   $B = b = 1.5988$

Solution:  $y = 0.00182\,x^{1.5988}$

(By Excel trendline:  y = 0.0018x^1.599,  R² = 1)
   [Figure: reaction rate (mole/s) vs concentration (mole/ft3) with the fitted power curve]
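This worked example can be reproduced in a few lines; a sketch with NumPy, which should return values close to A ≈ 0.0018 and B ≈ 1.6:

```python
import numpy as np

conc = np.array([100, 80, 60, 40, 20, 10, 5, 1], dtype=float)            # mole/ft3
rate = np.array([2.85, 2.00, 1.25, 0.67, 0.22, 0.072, 0.024, 0.0018])    # mole/s

B, lnA = np.polyfit(np.log(conc), np.log(rate), 1)
print(np.exp(lnA), B)      # approximately 0.0018 and 1.6, matching the hand calculation
```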
• Polynomial function
Empirical equation:  $y = a_0 + a_1 x + \ldots + a_k x^k$

Residual:  $S_r = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - \ldots - a_k x_i^k\right)^2$

Partial derivatives:

$\dfrac{\partial S_r}{\partial a_0} = -2\sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - \ldots - a_k x_i^k\right) = 0$

$\dfrac{\partial S_r}{\partial a_1} = -2\sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - \ldots - a_k x_i^k\right)x_i = 0$

   ⋮

$\dfrac{\partial S_r}{\partial a_k} = -2\sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - \ldots - a_k x_i^k\right)x_i^k = 0$

Solution (the normal equations):

$a_0 n + a_1\sum_{i=1}^{n} x_i + \ldots + a_k\sum_{i=1}^{n} x_i^k = \sum_{i=1}^{n} y_i$

$a_0\sum_{i=1}^{n} x_i + a_1\sum_{i=1}^{n} x_i^2 + \ldots + a_k\sum_{i=1}^{n} x_i^{k+1} = \sum_{i=1}^{n} x_i y_i$

   ⋮

$a_0\sum_{i=1}^{n} x_i^k + a_1\sum_{i=1}^{n} x_i^{k+1} + \ldots + a_k\sum_{i=1}^{n} x_i^{2k} = \sum_{i=1}^{n} x_i^k y_i$
Example

(1) Straight line:  $y = a_0 + a_1 x$

Residual:  $S_r = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right)^2$

$\dfrac{\partial S_r}{\partial a_0} = -2\sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right) = 0$

$\dfrac{\partial S_r}{\partial a_1} = -2\sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right)x_i = 0$

or

$na_0 + a_1\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i$

$a_0\sum_{i=1}^{n} x_i + a_1\sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i$

(2) Quadratic:  $y = a_0 + a_1 x + a_2 x^2$

Residual:  $S_r = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right)^2$

$\dfrac{\partial S_r}{\partial a_0} = -2\sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right) = 0$

$\dfrac{\partial S_r}{\partial a_1} = -2\sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right)x_i = 0$

$\dfrac{\partial S_r}{\partial a_2} = -2\sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right)x_i^2 = 0$

or

$na_0 + a_1\sum_{i=1}^{n} x_i + a_2\sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} y_i$

$a_0\sum_{i=1}^{n} x_i + a_1\sum_{i=1}^{n} x_i^2 + a_2\sum_{i=1}^{n} x_i^3 = \sum_{i=1}^{n} x_i y_i$

$a_0\sum_{i=1}^{n} x_i^2 + a_1\sum_{i=1}^{n} x_i^3 + a_2\sum_{i=1}^{n} x_i^4 = \sum_{i=1}^{n} x_i^2 y_i$