

Contents

• Introduction
• Causes and Types of Experimental Errors
• Uncertainty Analysis
• Statistical Analysis of Experimental Data
• Graphical Analysis and Curve Fitting
• Choice of Graph Formats
Introduction-I
 Some form of analysis must be performed on all experimental data. The analysis may be
a simple verbal appraisal of the test results, or it may take the form of a complex
theoretical analysis of the errors involved in the experiment and matching of the data
with fundamental physical principles.
 Even new principles may be developed in order to explain some unusual phenomenon.
 Many considerations enter into a final determination of the validity of the results of
experimental data, and we wish to present some of these considerations in this
presentation.
Introduction-II
 The elimination of data points must be consistent and should not be dependent on
human whims and bias based on what “ought to be.”
 In many instances it is very difficult for the individual to be consistent and unbiased.
The pressure of a deadline, disgust with previous experimental failures, and normal
impatience all can influence rational thinking processes.
 However, the competent experimentalist will strive to maintain consistency in the
primary data analysis.
Causes and Types of Errors-I

 Single-sample data are those in which some uncertainties may not be discovered by
repetition.
 Multi-sample data are obtained in those instances where enough experiments are
performed that the reliability of the results can be assured by statistics, although
cost will sometimes prohibit the collection of multi-sample data.
Example: If one measures pressure with a single pressure gage for the entire set of
observations, then some of the error that is present in the measurement will be sampled
only once no matter how many times the reading is repeated. Consequently, such an
experiment is a single-sample experiment. On the other hand, if more than one pressure
gage is used for the same total set of observations, then a multi-sample experiment has
been performed. The number of observations will then determine the success of this
multi-sample experiment in accordance with accepted statistical principles.
Causes and Types of Errors-II

 The real errors in experimental data are those factors that are always vague to some
extent and carry some amount of uncertainty.
 Our task is to determine just how uncertain a particular observation may be and to
devise a consistent way of specifying the uncertainty in analytical form.
 Since the magnitude of error is always uncertain, it is better to say experimental
uncertainty than experimental error.
Types of Error
 There can always be those gross blunders in apparatus or instrument
construction which may invalidate the data. Hopefully, the careful
experimenter will be able to eliminate most of these errors.
 Fixed errors, which cause repeated readings to be in error by roughly the same
amount, for some unknown reason. These fixed errors are sometimes called
systematic errors, or bias errors.
 The random errors, which may be caused by personal fluctuations, random
electronic fluctuations in the apparatus or instruments, various influences of
friction, and so forth. These random errors usually follow a certain statistical
distribution, but not always.
Uncertainty Analysis
Suppose a set of measurements is made and the uncertainty in each measurement may be
expressed with the same odds. These measurements are then used to calculate some
desired result of the experiments. We wish to estimate the uncertainty in the calculated
result on the basis of the uncertainties in the primary measurements. The result R is a
given function of the independent variables x1, x2, x3, . . . , xn. Thus,

R = R(x_1, x_2, x_3, \ldots, x_n)

Let wR be the uncertainty in the result and w1, w2, . . . , wn be the uncertainties in the
independent variables. If the uncertainties in the independent variables are all given with
the same odds, then the uncertainty in the result having these odds is given as

w_R = \left[ \left( \frac{\partial R}{\partial x_1} w_1 \right)^2 + \left( \frac{\partial R}{\partial x_2} w_2 \right)^2 + \cdots + \left( \frac{\partial R}{\partial x_n} w_n \right)^2 \right]^{1/2}   …………………………(1)
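As a sketch of how Eq. (1) can be applied numerically, the partial derivatives can be approximated by central differences. The resistor-power example and all numbers below are illustrative, not from the text:

```python
import math

def uncertainty(f, x, w, h=1e-6):
    """Root-sum-square uncertainty w_R for R = f(x1..xn), per Eq. (1).

    f : function of a list of variable values
    x : nominal values of the independent variables
    w : uncertainties w_i, all quoted with the same odds
    """
    total = 0.0
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        dRdxi = (f(xp) - f(xm)) / (2 * h)  # central-difference partial derivative
        total += (dRdxi * w[i]) ** 2
    return math.sqrt(total)

# Power dissipated in a resistor: P = E^2 / R
P = lambda v: v[0] ** 2 / v[1]
# E = 100 +/- 1 V, R = 10 +/- 0.5 ohm (hypothetical readings)
wP = uncertainty(P, x=[100.0, 10.0], w=[1.0, 0.5])
```

Here the exact partials are ∂P/∂E = 2E/R = 20 and ∂P/∂R = −E²/R² = −10, so wP ≈ √(400 + 2500) ≈ 53.9 W on a nominal 1000 W, i.e. about 5.4 percent.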
Uncertainty for product Functions
In many cases the result function takes the form of a product of the respective primary
variables raised to exponents and expressed as

R = x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}

When the partial differentiations are performed, we obtain

\frac{\partial R}{\partial x_i} = a_i \frac{R}{x_i}

Dividing by R, we have

\frac{1}{R} \frac{\partial R}{\partial x_i} = \frac{a_i}{x_i}

Therefore, inserting this into Eq. (1),

\frac{w_R}{R} = \left[ \sum_{i} \left( \frac{a_i w_i}{x_i} \right)^2 \right]^{1/2}
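For product functions only the exponents and the relative uncertainties matter, which makes the computation one line. A minimal sketch (exponents and percentages are made up):

```python
import math

def relative_uncertainty(exponents, rel_uncertainties):
    """w_R / R for R = x1^a1 * x2^a2 * ... given each w_i / x_i."""
    return math.sqrt(sum((a * r) ** 2
                         for a, r in zip(exponents, rel_uncertainties)))

# P = E^2 * R^-1 with 1% uncertainty in both E and R:
rel = relative_uncertainty([2, -1], [0.01, 0.01])  # about 2.2%
```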
Uncertainty for Additive Functions

When the result function has an additive form, R will be expressed as

R = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n

and the partial derivatives are

\frac{\partial R}{\partial x_i} = a_i

The uncertainty in the result may then be expressed as

w_R = \left[ \sum_{i} (a_i w_i)^2 \right]^{1/2}
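The additive case can be sketched the same way; here the absolute uncertainties combine directly (coefficients and uncertainties below are illustrative):

```python
import math

def additive_uncertainty(coeffs, uncertainties):
    """w_R for R = a1*x1 + a2*x2 + ... : root-sum-square of a_i * w_i."""
    return math.sqrt(sum((a * w) ** 2
                         for a, w in zip(coeffs, uncertainties)))

# R = x1 + 2*x2 with w1 = 0.3, w2 = 0.1:
wR = additive_uncertainty([1, 2], [0.3, 0.1])
```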
Statistical Analysis of Experimental Data

Definition of some pertinent terms

 When a set of readings of an instrument is taken, the individual readings will vary
somewhat from each other, and the experimenter may be concerned with the mean of
all the readings. If each reading is denoted by xi and there are n readings, the arithmetic
mean is given by

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i

The deviation di for each reading is defined by

d_i = x_i - \bar{x}

We may note that the average of the deviations of all the readings is zero, since

\frac{1}{n} \sum_{i=1}^{n} d_i = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x}) = \bar{x} - \bar{x} = 0

The average of the absolute values of the deviations is given by

\overline{|d_i|} = \frac{1}{n} \sum_{i=1}^{n} |d_i|

Note that this quantity is not necessarily zero.
The standard deviation or root-mean-square deviation is defined by

\sigma = \left[ \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 \right]^{1/2}

and the square of the standard deviation, \sigma^2, is called the variance. This is sometimes called
the population or biased standard deviation.

For small sets of data an unbiased or sample standard deviation is defined by

\sigma = \left[ \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 \right]^{1/2}
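The definitions above can be checked directly in Python; the readings below are a fabricated set used only to exercise the formulas:

```python
import math

readings = [5.30, 5.73, 6.77, 5.26, 4.33, 5.45, 6.09, 5.64, 5.81, 5.75]

n = len(readings)
mean = sum(readings) / n
deviations = [x - mean for x in readings]

sum_dev = sum(deviations)                      # averages to zero, as shown above
abs_dev = sum(abs(d) for d in deviations) / n  # mean absolute deviation, not zero
sigma_pop = math.sqrt(sum(d * d for d in deviations) / n)        # population (biased)
sigma_samp = math.sqrt(sum(d * d for d in deviations) / (n - 1)) # sample (unbiased)
```

Note that the sample value is always slightly larger than the population value, since it divides by n − 1 instead of n.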
 Sometimes it is appropriate to use a geometric mean when studying phenomena which
grow in proportion to their size; this would apply to certain biological processes. It is
defined by

x_g = (x_1 x_2 x_3 \cdots x_n)^{1/n}
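A quick numerical check of the geometric mean, computed through logarithms to avoid overflow in long products (the growth factors are made up):

```python
import math

# Growth factors of a population measured over three periods (hypothetical):
factors = [2.0, 8.0, 4.0]

# (2 * 8 * 4)^(1/3) = 64^(1/3) = 4.0
geo_mean = math.exp(sum(math.log(x) for x in factors) / len(factors))
```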
Method of Least Squares

Suppose we have a set of observations x1, x2, . . . , xn. The sum of the squares of their
deviations from some mean value xm is

S = \sum_{i=1}^{n} (x_i - x_m)^2

where n is the number of observations. The value of xm which minimizes this sum of
squares is the arithmetic mean.
We seek an equation of the form

y = ax + b

We therefore wish to minimize the quantity

S = \sum_{i=1}^{n} \left[ y_i - (a x_i + b) \right]^2

This is accomplished by setting the derivatives with respect to a and b equal to zero.
Performing these operations, there results

n b + a \sum x_i = \sum y_i
b \sum x_i + a \sum x_i^2 = \sum x_i y_i

Solving both equations simultaneously gives

a = \frac{n \sum x_i y_i - \left( \sum x_i \right) \left( \sum y_i \right)}{n \sum x_i^2 - \left( \sum x_i \right)^2}

b = \frac{\left( \sum y_i \right) \left( \sum x_i^2 \right) - \left( \sum x_i y_i \right) \left( \sum x_i \right)}{n \sum x_i^2 - \left( \sum x_i \right)^2}
Designating the computed value of y as y_{ic}, we have

y_{ic} = a x_i + b

and the standard error of estimate of y for the data is

\text{Standard error} = \left[ \frac{\sum_{i=1}^{n} (y_i - y_{ic})^2}{n - 2} \right]^{1/2}

The method of least squares may also be used for determining higher-order polynomials
for fitting data. One only needs to perform additional differentiations to determine
additional constants.
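The closed-form slope and intercept above translate directly into code. A sketch with fabricated (x, y) data:

```python
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.2, 2.9, 4.1, 5.2, 7.0]

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)    # slope
b = (sy * sxx - sxy * sx) / (n * sxx - sx * sx)  # intercept

# standard error of estimate, with n - 2 in the denominator
y_fit = [a * x + b for x in xs]
std_error = math.sqrt(sum((y - yc) ** 2
                          for y, yc in zip(ys, y_fit)) / (n - 2))
```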
The Correlation Coefficient

Having determined a correlation between the variables x and y by least squares, we now
want to know how good the fit is. The parameter which conveys this information is the
correlation coefficient r, defined by

r = \left[ 1 - \frac{\sigma_{y,x}^2}{\sigma_y^2} \right]^{1/2}

where σy is the standard deviation of y, given as

\sigma_y = \left[ \frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n - 1} \right]^{1/2}

and σy,x is computed from the deviations about the fitted line:

\sigma_{y,x}^2 = \frac{1}{n - 2} \sum_{i=1}^{n} (y_i - y_{ic})^2

The division by n − 2 results from the fact that we have used the two derived variables a and
b in determining the value of y_{ic}, which removes two degrees of freedom from the
system of data. The correlation coefficient may also be written as

r^2 = 1 - \frac{\sigma_{y,x}^2}{\sigma_y^2}

where, now, r^2 is called the coefficient of determination.


Note: For a perfect fit σy,x = 0, because there are no deviations between the data and the
correlation; in this case r = 1.0.
If σy = σy,x, we obtain r = 0, indicating a poor fit or substantial scatter around the fitted line.
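Continuing with the same fabricated (x, y) data as in the least-squares sketch, the correlation coefficient works out close to 1, as expected for nearly linear data:

```python
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.2, 2.9, 4.1, 5.2, 7.0]
n = len(xs)

# slope and intercept from the least-squares formulas
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy * sxx - sxy * sx) / (n * sxx - sx * sx)

y_mean = sy / n
var_y = sum((y - y_mean) ** 2 for y in ys) / (n - 1)                    # sigma_y^2
var_yx = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / (n - 2)  # sigma_{y,x}^2

r = math.sqrt(1 - var_yx / var_y)  # close to 1.0 for this nearly linear data
```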
Multivariable Regression

The least-squares method may be extended to perform a regression analysis for more than
one variable. In the linear case we would have the form

y = b + m_1 x_1 + m_2 x_2 + \cdots + m_n x_n

where the xn are the independent variables. For only two variables we form the sum of the
squares

S = \sum_{i=1}^{n} (y_i - b - m_1 x_{1,i} - m_2 x_{2,i})^2

and minimize this sum with the differentiations:

\frac{\partial S}{\partial b} = 0, \qquad \frac{\partial S}{\partial m_1} = 0, \qquad \frac{\partial S}{\partial m_2} = 0
This set of linear equations may then be solved for the coefficients m1,m2, and b.
Multivariable regression calculations can become rather involved and are best performed
with a computer.
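A sketch of the two-variable case using only the standard library. The data are fabricated so that the plane y = 1 + 2·x1 + 3·x2 fits exactly; a real analysis would normally hand the normal equations to a linear-algebra library:

```python
def solve3(A, v):
    """Solve a 3x3 linear system A m = v by Gaussian elimination with pivoting."""
    M = [row[:] + [rhs] for row, rhs in zip(A, v)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [2.0, 1.0, 4.0, 3.0]
y = [9.0, 8.0, 19.0, 18.0]  # generated from y = 1 + 2*x1 + 3*x2
n = len(y)

# Normal equations: set the derivatives of S with respect to b, m1, m2 to zero.
s11 = sum(a * a for a in x1)
s22 = sum(c * c for c in x2)
s12 = sum(a * c for a, c in zip(x1, x2))
A = [[n, sum(x1), sum(x2)],
     [sum(x1), s11, s12],
     [sum(x2), s12, s22]]
v = [sum(y),
     sum(a * c for a, c in zip(x1, y)),
     sum(a * c for a, c in zip(x2, y))]

b, m1, m2 = solve3(A, v)  # recovers b = 1, m1 = 2, m2 = 3
```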
Standard Deviation of the Mean

We have taken the arithmetic mean as the best estimate of the true value of a set of
experimental measurements. The question is: how good (precise) is this mean? To obtain
an experimental answer to this question it would be necessary to repeat the set of
measurements and to find a new arithmetic mean. It turns out that the problem may be
resolved with a statistical analysis, which we shall not present here. The result is

\sigma_m = \frac{\sigma}{\sqrt{n}}

where σm = standard deviation of the mean value
σ = standard deviation of the set of measurements
n = number of measurements in the set
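Numerically, the standard deviation of the mean simply shrinks as 1/√n. A sketch reusing the fabricated readings from earlier:

```python
import math

readings = [5.30, 5.73, 6.77, 5.26, 4.33, 5.45, 6.09, 5.64, 5.81, 5.75]
n = len(readings)
mean = sum(readings) / n
sigma = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

sigma_m = sigma / math.sqrt(n)  # the mean is sqrt(n) times more precise
```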
Student’s t-distribution
For small samples (n < 10) the previous relation is somewhat unreliable. A better relation
was given by Student, who introduced the variable t:

X = \bar{x} \pm t \frac{\sigma}{\sqrt{n}}

where t is given by

t = \frac{(\bar{x} - X)\sqrt{n}}{\sigma}

where n = number of observations

\bar{x} = mean of the n observations
X = mean of the normal population from which the samples are taken

Student then developed a distribution function f(t) such that

f(t) = K_0 \left( 1 + \frac{t^2}{\nu} \right)^{-(\nu + 1)/2}

where K0 is a constant which depends on n and ν = n − 1 is the number of degrees of
freedom. When n → ∞, the distribution function approaches the normal distribution.
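A sketch of a small-sample confidence interval for the population mean X, again with the fabricated readings. The t value 2.262 for ν = 9 degrees of freedom at 95 percent confidence is taken from a standard t-table:

```python
import math

readings = [5.30, 5.73, 6.77, 5.26, 4.33, 5.45, 6.09, 5.64, 5.81, 5.75]
n = len(readings)
mean = sum(readings) / n
sigma = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

t = 2.262  # t-table value for nu = n - 1 = 9, 95% confidence
half_width = t * sigma / math.sqrt(n)
interval = (mean - half_width, mean + half_width)  # X lies here with ~95% odds
```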
Graphical Analysis and Curve Fitting

 Engineers plot many curves of experimental data to extract significant facts. The better
one understands the physical phenomena involved in a certain experiment, the better
one is able to extract a wide variety of information from graphical displays of
experimental data.
 Blind curve-plotting and cross-plotting usually generate an excess of displays, which are
confusing.
 If the experimenter has a good idea of the type of function that will represent the data,
then the type of plot is easily selected. It is possible to estimate the functional form that
the data will take on the basis of theoretical considerations and the results of previous
experiments of a similar nature.
Choice of Formats

 While bar charts, column charts, pie charts, and similar types of displays have some
applications, by far the most frequently used display is the x-y graph with choices of
coordinates to match the situation.

Table 1. Methods of plotting various functions to obtain a straight line
