Insppt 6
• Introduction
• Causes and Types of Experimental Errors
• Uncertainty Analysis
• Statistical Analysis of Experimental Data
• Graphical Analysis and Curve Fitting
• Choice of Graph Formats
Introduction-I
Some form of analysis must be performed on all experimental data. The analysis may be
a simple verbal appraisal of the test results, or it may take the form of a complex
theoretical analysis of the errors involved in the experiment and matching of the data
with fundamental physical principles.
Even new principles may be developed in order to explain some unusual phenomenon.
Many considerations enter into a final determination of the validity of the results of
experimental data, and we wish to present some of these considerations in this
presentation.
Introduction-II
The elimination of data points must be consistent and should not be dependent on
human whims and bias based on what “ought to be.”
In many instances it is very difficult for the individual to be consistent and unbiased.
The pressure of a deadline, disgust with previous experimental failures, and normal
impatience all can influence rational thinking processes.
However, the competent experimentalist will strive to maintain consistency in the
primary data analysis.
Causes and Types of Errors-I
Single-sample data are those in which some uncertainties may not be discovered by
repetition.
Multi-sample data are obtained in those instances where enough experiments are
performed that the reliability of the results can be assured by statistics, although
cost will sometimes prohibit the collection of multi-sample data.
Example-If one measures pressure with a single pressure gage for the entire set of
observations, then some of the error that is present in the measurement will be sampled
only once no matter how many times the reading is repeated. Consequently, such an
experiment is a single-sample experiment. On the other hand, if more than one pressure
gage is used for the same total set of observations, then a multi-sample experiment has
been performed. The number of observations will then determine the success of this
multi-sample experiment in accordance with accepted statistical principles.
Causes and Types of Errors-II
The real errors in experimental data are those factors that are always vague to some
extent and carry some amount of uncertainty.
Our task is to determine just how uncertain a particular observation may be and to
devise a consistent way of specifying the uncertainty in analytical form.
Since the magnitude of error is always uncertain, it is better to say experimental
uncertainty than experimental error.
Types of Error
• Gross blunders in apparatus or instrument construction, which may invalidate the data. The careful experimenter should be able to eliminate most of these errors.
• Fixed errors, which cause repeated readings to be in error by roughly the same amount but for some unknown reason. These fixed errors are sometimes called systematic errors, or bias errors.
• Random errors, which may be caused by personal fluctuations, random electronic fluctuations in the apparatus or instruments, various influences of friction, and so forth. Random errors usually, but not always, follow a certain statistical distribution.
Uncertainty Analysis
Suppose a set of measurements is made and the uncertainty in each measurement may be
expressed with the same odds. These measurements are then used to calculate some
desired result of the experiments. We wish to estimate the uncertainty in the calculated
result on the basis of the uncertainties in the primary measurements. The result R is a
given function of the independent variables x1, x2, x3, . . . , xn. Thus,
R = R(x1, x2, x3, . . . , xn)    (1)

If wR is the uncertainty in the result and w1, w2, . . . , wn are the uncertainties in the independent variables, all given with the same odds, the uncertainty in the result is

wR = [ (∂R/∂x1 · w1)² + (∂R/∂x2 · w2)² + · · · + (∂R/∂xn · wn)² ]^(1/2)    (2)
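The root-sum-square combination of the measurement uncertainties can be sketched numerically: approximate each partial derivative ∂R/∂xi by a central difference and combine the terms in quadrature. The function name and the power example below are illustrative assumptions, not part of the original text.

```python
import math

def uncertainty(R, xs, ws, h=1e-6):
    """Root-sum-square uncertainty in R(x1, ..., xn) given uncertainties ws.
    Each partial derivative is approximated by a central difference."""
    total = 0.0
    for i, (x, w) in enumerate(zip(xs, ws)):
        step = h * max(abs(x), 1.0)
        hi = list(xs); hi[i] = x + step
        lo = list(xs); lo[i] = x - step
        dRdx = (R(*hi) - R(*lo)) / (2.0 * step)   # numerical dR/dxi
        total += (dRdx * w) ** 2
    return math.sqrt(total)

# Hypothetical example: power P = E**2 / Re with E = 100 +/- 1 V, Re = 10 +/- 0.1 ohm
w_P = uncertainty(lambda E, Re: E**2 / Re, [100.0, 10.0], [1.0, 0.1])
```

For this example the analytical terms are (2E/Re · wE)² = 400 and (E²/Re² · wRe)² = 100, so wP = √500 ≈ 22.4 W, which the numerical sketch reproduces.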
Uncertainty for product Functions
In many cases the result function takes the form of a product of the respective primary
variables raised to exponents:

R = x1^a1 · x2^a2 · · · xn^an

The partial derivative with respect to a particular variable xi is

∂R/∂xi = ai · x1^a1 · x2^a2 · · · xi^(ai − 1) · · · xn^an

Dividing by R, we have

(1/R) · ∂R/∂xi = ai / xi

Inserting this into the uncertainty expression and dividing by R, therefore,

wR/R = [ Σ ( ai · wi / xi )² ]^(1/2)
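For the product form, the fractional uncertainty in the result combines the fractional uncertainties of the variables weighted by their exponents, wR/R = [Σ(ai·wi/xi)²]^(1/2). A minimal sketch (the function name and data are illustrative assumptions):

```python
import math

def fractional_uncertainty(exponents, values, uncertainties):
    """w_R / R for R = x1**a1 * x2**a2 * ... * xn**an."""
    return math.sqrt(sum((a * w / x) ** 2
                         for a, x, w in zip(exponents, values, uncertainties)))

# Hypothetical example: R = E**2 * Re**-1 with E = 100 +/- 1, Re = 10 +/- 0.1
frac = fractional_uncertainty([2, -1], [100.0, 10.0], [1.0, 0.1])
```

Here frac = [(2 · 1/100)² + (0.1/10)²]^(1/2) ≈ 0.0224, i.e. about 2.2 percent.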
Uncertainty for Additive Functions
When the result function is an additive combination of the primary variables,

R = a1·x1 + a2·x2 + · · · + an·xn

each partial derivative is simply ∂R/∂xi = ai, and the uncertainty expression gives

wR = [ Σ ( ai · wi )² ]^(1/2)

Statistical Analysis of Experimental Data
When a set of readings of an instrument is taken, the individual readings will vary
somewhat from each other, and the experimenter may be concerned with the mean of
all the readings. If each reading is denoted by xi and there are n readings, the arithmetic
mean is given by

xm = (1/n) Σ xi

The deviation of each reading from the mean is di = xi − xm. We may note that the
average of the deviations of all the readings is zero, since

(1/n) Σ di = (1/n) Σ (xi − xm) = xm − xm = 0

The standard deviation is defined by

σ = [ (1/n) Σ (xi − xm)² ]^(1/2)

and the square of the standard deviation, σ², is called the variance. This is sometimes
called the population, or biased, standard deviation.
Suppose we have a set of observations x1, x2, . . . , xn. The sum of the squares of their
deviations from some mean value xm is

S = Σ (xi − xm)²

where n is the number of observations. The value of xm that minimizes this sum of the
squares of the deviations is the arithmetic mean: setting dS/dxm = −2 Σ (xi − xm) = 0
gives xm = (1/n) Σ xi.
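These definitions translate directly into code; the readings below are made-up values used only to exercise the formulas.

```python
readings = [5.30, 5.73, 6.77, 5.26, 4.33, 5.45, 6.09, 5.64, 5.81, 5.75]

n = len(readings)
x_m = sum(readings) / n                          # arithmetic mean
deviations = [x - x_m for x in readings]         # these sum to (essentially) zero
variance = sum(d * d for d in deviations) / n    # population (biased) variance
sigma = variance ** 0.5                          # standard deviation

# The arithmetic mean minimizes the sum of squared deviations:
S = lambda c: sum((x - c) ** 2 for x in readings)
assert S(x_m) <= min(S(x_m - 0.1), S(x_m + 0.1))
```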
Method of Least Squares
We seek an equation of the form

y = ax + b

that best fits a set of data points (xi, yi). We therefore minimize the sum of the squares
of the deviations,

S = Σ [ yi − (a·xi + b) ]²

This is accomplished by setting the derivatives of S with respect to a and b equal to zero.
Performing these operations, there results

a = [ n Σ(xi yi) − (Σxi)(Σyi) ] / [ n Σxi² − (Σxi)² ]
b = [ (Σyi)(Σxi²) − (Σxi)(Σ xi yi) ] / [ n Σxi² − (Σxi)² ]
The method of least squares may also be used for determining higher-order polynomials
for fitting data. One only needs to perform additional differentiations to determine
additional constants.
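The closed-form results for a and b are easy to check numerically; the function name and data points below are illustrative assumptions.

```python
def least_squares(xs, ys):
    """Fit y = a*x + b by minimizing the sum of squared deviations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom
    b = (sy * sxx - sx * sxy) / denom
    return a, b

# Points taken exactly from y = 2x + 1 should return a = 2, b = 1
a, b = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
```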
The Correlation Coefficient
After assuming a suitable correlation between the variables x and y by using least squares,
we now want to know how good the fit is. The parameter that conveys this information is
the correlation coefficient r, defined by

r = [ 1 − σ²(y,x) / σy² ]^(1/2)

where σy is the standard deviation of the yi values,

σy² = [ 1/(n − 1) ] Σ (yi − ym)²

and σ(y,x) is the standard error of estimate of y on x,

σ²(y,x) = [ 1/(n − 2) ] Σ (yi − yic)²

with yic the value of y computed from the correlation at each xi. The division by n − 2
results from the fact that we have used the two derived parameters a and b in determining
yic, which removes two degrees of freedom from the system of data. The correlation
coefficient r may also be written as

r = [ n Σ(xi yi) − (Σxi)(Σyi) ] / { [ n Σxi² − (Σxi)² ]^(1/2) · [ n Σyi² − (Σyi)² ]^(1/2) }
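The product-moment form of r uses only sums over the data, which makes it simple to compute; the function name and data below are illustrative assumptions.

```python
import math

def correlation(xs, ys):
    """Correlation coefficient r in its product-moment form."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    return ((n * sxy - sx * sy)
            / math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy)))

# Perfectly linear data should give r = 1
r = correlation([0, 1, 2, 3], [1, 3, 5, 7])
```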
Multivariable Regression
The least-squares method may also be extended to perform a regression analysis for more
than one independent variable. In the linear case we would have the form

y = m1·x1 + m2·x2 + · · · + mn·xn + b

where the xn are the independent variables. For only two independent variables we form
the sum of the squares of the deviations

S = Σ ( yi − m1·x1i − m2·x2i − b )²

and set the derivatives of S with respect to m1, m2, and b equal to zero. This yields

Σ(x1i yi) = m1 Σx1i² + m2 Σ(x1i x2i) + b Σx1i
Σ(x2i yi) = m1 Σ(x1i x2i) + m2 Σx2i² + b Σx2i
Σyi = m1 Σx1i + m2 Σx2i + n·b
This set of linear equations may then be solved for the coefficients m1,m2, and b.
Multivariable regression calculations can become rather involved and are best performed
with a computer.
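A minimal sketch of such a computation for two independent variables: build the normal equations and solve the resulting 3×3 system. The function names and data are illustrative assumptions, and the small Gaussian-elimination solver stands in for whatever linear-algebra routine one would normally use.

```python
def gauss_solve(A, v):
    """Solve a small linear system by Gaussian elimination with partial pivoting."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                     # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_two_vars(x1s, x2s, ys):
    """Least-squares fit of y = m1*x1 + m2*x2 + b via the normal equations."""
    n = len(ys)
    A = [[sum(a * a for a in x1s), sum(a * c for a, c in zip(x1s, x2s)), sum(x1s)],
         [sum(a * c for a, c in zip(x1s, x2s)), sum(c * c for c in x2s), sum(x2s)],
         [sum(x1s), sum(x2s), n]]
    v = [sum(a * y for a, y in zip(x1s, ys)),
         sum(c * y for c, y in zip(x2s, ys)),
         sum(ys)]
    return gauss_solve(A, v)

# Data generated exactly from y = 2*x1 - 3*x2 + 5 should be recovered
x1s, x2s = [0, 1, 2, 3, 4], [1, 0, 2, 1, 3]
ys = [2 * a - 3 * c + 5 for a, c in zip(x1s, x2s)]
m1, m2, b = fit_two_vars(x1s, x2s, ys)
```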
Standard Deviation of the Mean
We have taken the arithmetic mean value as the best estimate of the true value of a set of
experimental measurements. The question is how good (precise) this mean is as the best
estimate of the true value of a set of readings. To obtain an experimental
answer to this question it would be necessary to repeat the set of measurements and to
find a new arithmetic mean. It turns out that the problem may be resolved with a statistical
analysis which we shall not present here. The result is

σm = σ / n^(1/2)

where σm is the standard deviation of the mean of n readings and σ is the standard
deviation of the set of readings. For small samples the precision of the mean is described
by the Student's t-distribution, where t is given by

t = (xm − μ) / (σ / n^(1/2))

with μ the true mean, and the distribution function is

f(t) = K0 [ 1 + t²/ν ]^(−(ν+1)/2)

where K0 is a constant which depends on n and ν = n − 1 is the number of degrees of
freedom. When n → ∞, the distribution function approaches the normal distribution.
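The σ/√n behavior of the mean can be checked by simulation: repeatedly draw sets of n readings from the same parent distribution and look at the scatter of the resulting means. The parent distribution, sample size, and trial count below are arbitrary choices for illustration.

```python
import math
import random
import statistics

random.seed(0)
sigma, n, trials = 2.0, 100, 2000

# Each entry of `means` is the mean of one simulated set of n readings.
means = [statistics.fmean(random.gauss(10.0, sigma) for _ in range(n))
         for _ in range(trials)]

observed = statistics.pstdev(means)   # scatter of the set means
predicted = sigma / math.sqrt(n)      # standard deviation of the mean
```

With these settings, predicted = 0.2 and the observed scatter agrees to within a few percent.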
Graphical Analysis and Curve Fitting
Engineers plot many curves of experimental data to extract significant facts. The better
one understands the physical phenomena involved in a certain experiment, the better
one is able to extract a wide variety of information from graphical displays of
experimental data.
Blind curve-plotting and cross-plotting usually generate an excess of displays, which are
confusing.
If the experimenter has a good idea of the type of function that will represent the data,
then the type of plot is easily selected. It is possible to estimate the functional form that
the data will take on the basis of theoretical considerations and the results of previous
experiments of a similar nature.
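As one concrete case, data suspected to follow a power law y = a·x^b plot as a straight line on log-log coordinates, since log y = log a + b·log x; a linear fit of log y against log x then recovers the exponent and coefficient. The synthetic data below are illustrative assumptions.

```python
import math

xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [3.0 * x ** 1.5 for x in xs]      # synthetic data from y = 3 * x**1.5

lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]

# Straight-line least-squares fit of log y = log a + b * log x
n = len(lx)
sx, sy = sum(lx), sum(ly)
sxx = sum(u * u for u in lx)
sxy = sum(u * v for u, v in zip(lx, ly))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # exponent
a = math.exp((sy - b * sx) / n)                 # coefficient
```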
Choice of Formats
While bar charts, column charts, pie charts, and similar types of displays have some
applications, by far the most frequently used display is the x-y graph with choices of
coordinates to match the situation.