
Random and systematic errors

Random errors: Random errors cause values to lie above and below the true value and, due to the scatter they create, they are usually easy to recognise. As long as only random errors exist, the mean of the values obtained through measurement tends towards the true value as the number of repeat measurements increases.

Systematic error: A systematic error is one that causes the measured values to be consistently above or consistently below the true value; it is also sometimes referred to as a bias error. An example of this is the zero offset of a spring balance. Suppose, with no mass attached to the spring, the balance indicates 0.01 N. We can ‘correct’ for the zero offset
I. if the facility is available, by adjusting the balance to indicate zero, or
II. by subtracting 0.01 N from every value obtained using the balance.
No value, including an offset, can be known exactly, so the correction applied to values will have its own error. As a consequence we cannot claim that an adjustment designed to correct for an offset has completely eliminated the systematic error – only that the error has been reduced. Some systematic errors are difficult to identify and have more to do with the way a measurement is made than with any obvious limitation in the measuring instrument. Systematic errors have no effect on the precision of values, but act to impair measurement accuracy.
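
As a rough numerical sketch of the zero-offset correction described above (the readings below are invented for illustration):

```python
# Hypothetical illustration of correcting a zero offset on a spring balance.
# The readings and the 0.01 N offset are invented for the example.
readings_N = [0.52, 0.49, 0.51, 0.50, 0.53]   # raw balance readings (N)
zero_offset_N = 0.01                          # balance reading with no mass attached

corrected_N = [r - zero_offset_N for r in readings_N]
print("corrected readings:", corrected_N)
# The correction reduces, but cannot completely remove, the systematic error,
# because the offset value 0.01 N has its own (unknown) error.
```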

Sources of random error: Many effects can cause values obtained through experiment to be scattered above and below the true value. Those effects may be traced to limitations in a measuring instrument or related to a human trait, such as reaction time. In some experiments, values may show scatter due to the influence of physical processes that are inherently random, such as Johnson noise, radioactive decay and molecular motion.

(1) Resolution error: A 3½ digit voltmeter on its 200 mV scale indicates voltages to the nearest 0.1 mV, i.e. the
resolution of the instrument on this scale is 0.1 mV. If the
scale indicates 182.3 mV, the true voltage could lie
anywhere between 182.25 mV and 182.35 mV. That is, the
resolution of the instrument is a source of error. If we
assume that sometimes the voltmeter indicates ‘rounded
up’ values which are greater than the true value and at
other times indicates ‘rounded down’ values that are less
than the true value, then it is reasonable to regard the
resolution of an instrument to be a source of random error.
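
A minimal simulation of this idea (the voltages below are invented): readings are rounded to the 0.1 mV resolution of the instrument, and the resulting error is sometimes positive and sometimes negative.

```python
import random

# Round 'true' voltages to the 0.1 mV resolution of a 3.5-digit voltmeter on
# its 200 mV range and look at the rounding (resolution) error for each reading.
random.seed(1)
resolution_mV = 0.1

for _ in range(5):
    true_mV = random.uniform(182.25, 182.35)           # true voltage near 182.3 mV
    indicated_mV = round(true_mV / resolution_mV) * resolution_mV
    print(f"true = {true_mV:.3f} mV, indicated = {indicated_mV:.1f} mV, "
          f"error = {indicated_mV - true_mV:+.3f} mV")
# The error lies between -0.05 mV and +0.05 mV and is sometimes positive,
# sometimes negative, i.e. it behaves like a random error.
```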

(2) Parallax error: If you move your head when viewing the position of a pointer on an analogue stopwatch or the top of
a column of alcohol in an alcohol-in-glass thermometer, the
value (as read from the scale of the instrument) changes.
The position of the viewer with respect to the scale
introduces parallax error. In the case of the alcohol-in-
glass thermometer, the parallax error stems from the
displacement of the scale from the top of the alcohol
column, as shown in figure 5.2. The parallax error may be
random if the eye moves with the top of the alcohol column
such that the eye is positioned sometimes above or below
the best viewing position shown by figure 5.2(c). However,
it is possible to either consistently view the alcohol column
with respect to the scale from a position
below the column (figure 5.2(a)) or above the column (figure
5.2(b)). In such situations the parallax error would be
systematic, i.e. values obtained would be consistently
below or above the true value.

(3) Reaction time error: Clocks and watches are available that routinely measure the time between events to the
nearest millisecond or better. If the starting and stopping of
a watch is synchronised with events ‘by hand’ then the
error introduced is often considerably larger than the
resolution of the watch. Hand timing of events introduces
errors which are typically 200 ms to 300 ms and therefore
time intervals of the order of seconds are not really suited
to hand timing. Though the timing error may be random, it
is possible that the experimenter consistently starts or
stops the watch prematurely or belatedly so that the
systematic component of the total error is larger than the
random one.
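
A small sketch of this (all numbers invented): hand timing is modelled as the true interval plus a systematic delay plus random scatter, so that averaging reduces the scatter but not the bias.

```python
import random

# Invented illustration: hand timing of a 5 s event with a systematic delay
# of 0.25 s when stopping the watch and random scatter of about 0.05 s.
random.seed(2)
true_interval_s = 5.0
systematic_delay_s = 0.25      # experimenter consistently stops the watch late
random_spread_s = 0.05

timings = [true_interval_s + systematic_delay_s + random.gauss(0.0, random_spread_s)
           for _ in range(10)]
mean_timing = sum(timings) / len(timings)
print(f"mean of hand timings = {mean_timing:.3f} s (true value {true_interval_s} s)")
# Averaging reduces the random scatter but leaves the ~0.25 s systematic error.
```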
Q.1.A(ii) Write a note on t-distribution, Poisson
distribution and binomial distribution.

ANS:
t-distribution: The normal distribution describes well the variability in the mean when sample sizes are large, but it describes the variability less well when sample sizes are small. It is important to be aware of this, as many experiments are carried out in which the number of repeat measurements is small (say ten or fewer). The difficulty stems from the assumption that the estimate of the standard deviation, s, is a good approximation to σ. The variation in values is such that, for small data sets, s is not a good approximation to σ, and the quantity

\[ \frac{\bar{x} - \mu}{s/\sqrt{n}} , \]

where n is the size of the sample, does not follow the standard normal distribution but another, closely related distribution, referred to as the ‘t’ distribution. If we write

\[ t = \frac{\bar{x} - \mu}{s/\sqrt{n}} , \]

we may study the distribution of t as n increases from n = 2 to n = ∞. The probability density function, f(t), can be written in terms of the variable, t, as

\[ f(t) = K(\nu)\left( 1 + \frac{t^{2}}{\nu} \right)^{-(\nu+1)/2} , \]

where K(ν) is a constant which depends on the number of degrees of freedom, ν. K(ν) is chosen so that

\[ \int_{-\infty}^{+\infty} f(t)\, dt = 1 . \]

Figure 3.21 shows the general shape of the t probability density function.

On the face of it, figure 3.21 has a shape indistinguishable from that of the normal distribution, with a characteristic ‘bell shape’ in evidence.
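
As a quick numerical illustration of this point (scipy assumed available, not part of the original notes), the t density can be compared with the standard normal density: for small numbers of degrees of freedom the t distribution has noticeably heavier tails, and the difference disappears as ν becomes large.

```python
from scipy import stats

# Compare the t probability density with the standard normal density at t = 2.
for nu in (1, 5, 30, 1000):
    print(f"nu = {nu:4d}: f_t(2) = {stats.t.pdf(2.0, df=nu):.4f}, "
          f"normal f(2) = {stats.norm.pdf(2.0):.4f}")
# For small nu the t density is noticeably larger at |t| = 2 (heavier tails);
# as nu increases it approaches the normal value.
```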

Poisson distribution: The Poisson distribution describes the probability of observing a given number of discrete events in a fixed interval when the events occur independently at a constant mean rate, for example the number of nuclei decaying in a radioactive sample in a fixed time. If the mean number of events in the interval is λ, the probability of observing exactly k events is

\[ P(X = k) = \frac{e^{-\lambda}\lambda^{k}}{k!}, \qquad k = 0, 1, 2, \ldots \]

The mean and the variance of the Poisson distribution are both equal to λ.

Binomial distribution: The binomial distribution applies when an experiment consists of n independent trials, each of which results in ‘success’ with probability p or ‘failure’ with probability 1 − p. The probability of obtaining exactly k successes in the n trials is

\[ P(X = k) = \binom{n}{k} p^{k} (1-p)^{n-k}, \qquad k = 0, 1, \ldots, n, \]

with mean np and variance np(1 − p). When n is large and p is small, with np held constant, the binomial distribution is well approximated by the Poisson distribution.
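
For illustration, the two probability mass functions can be evaluated numerically; the sketch below uses scipy (assumed available) with invented parameter values.

```python
from scipy import stats

# Evaluate the two probability mass functions for invented parameter values.
lam, n, p = 3.0, 10, 0.3

print("Poisson  P(X = 2), lambda = 3 :", stats.poisson.pmf(2, mu=lam))
print("Binomial P(X = 2), n=10, p=0.3:", stats.binom.pmf(2, n=n, p=p))
# Poisson:  P(X = k) = exp(-lambda) * lambda**k / k!
# Binomial: P(X = k) = C(n, k) * p**k * (1 - p)**(n - k)
```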
5.(ii) Explain the terms true value, population mean and sample mean.

ANS:
True value: The true value of a measurement is the value that would be obtained by a perfect measurement, i.e. in an ideal world.

Population mean: The population mean is the average of a characteristic taken over an entire group (population). The group could consist of people, items or things.

Sample mean: As values of a quantity usually show variability, we take the mean, x̄, of the values obtained through repeat measurement as the best estimate of the true value. x̄ is referred to as the sample mean, where

\[ \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i , \]

and n is the number of repeat measurements.
NOTE: The sample mean x̄ and the sample standard deviation s are estimates, based on a finite number of measurements, of the population mean μ and the population standard deviation σ.

Q.1.b(ii) What is the difference between precision and accuracy?

ANS:

Precision:
I. The closer the agreement between values obtained through repeat measurement of a quantity, the better is the precision of the measurement.
II. Precision makes no reference to how close the measured values are to the true value.
III. The values obtained through repeat measurements may show little scatter yet be far from the true value.

Accuracy:
I. A particular value is regarded as accurate if it is close to the true value.
II. Accuracy does refer to how close the measured values are to the true value.
III. The mean of n values is accurate when the mean tends to the true value as n becomes large, but the deviation of individual values from the mean may be quite large.
The normal distribution

The bell-shaped curve appearing in the figure is generated using the probability density function f(x) given below; this distribution is called the normal distribution.

\[ f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[ -\frac{(x-\mu)^{2}}{2\sigma^{2}} \right] \qquad (1) \]

Figure: Frequency versus time of fall through liquid.

Properties:
I. The distribution of the data set is approximately symmetric;
II. there is a single central ‘peak’;
III. most data are clustered around that peak (and, as a consequence, few data are found in the ‘tails’ of the distribution).
In equation (1), μ and σ are the population mean and the population standard deviation respectively, which we use to describe the centre and spread of a data set. Equation (1) is referred to as the normal probability density function. It is a ‘normalised’ equation, which is another way of saying that when it is integrated over −∞ < x < +∞, the value obtained is 1.

Using equation (1) we can generate the ‘bell shaped’ curve associated with any combination of μ and σ and hence, by integration, find the probability of obtaining a value of x within a specified interval. Figure 3.7 shows f(x) versus x for
the cases in which μ = 50 and σ = 2, 5 and 10.
The population mean, μ, is coincident with the centre of
the symmetric distribution and the standard deviation, σ,
is a measure of the spread of the data. A larger value of σ
results in a broader, flatter distribution, though the total
area under the curve remains equal to 1.
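
As a rough sketch of this (numpy and scipy assumed available), equation (1) can be evaluated for μ = 50 and σ = 2, 5 and 10, and the area under each curve checked numerically.

```python
import numpy as np
from scipy import stats

mu = 50.0
x = np.linspace(0.0, 100.0, 2001)

for sigma in (2.0, 5.0, 10.0):
    f = stats.norm.pdf(x, loc=mu, scale=sigma)   # equation (1)
    area = float(np.sum(f) * (x[1] - x[0]))      # simple numerical area under the curve
    print(f"sigma = {sigma:4.1f}: peak height = {f.max():.4f}, area = {area:.4f}")
# Larger sigma gives a lower, broader curve, but the area stays very close to 1.
```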
The normal distribution may be used to calculate the
probability of obtaining a value of x between the limits, x1
and x2. This is given by

\[ P(x_{1} \le x \le x_{2}) = \int_{x_{1}}^{x_{2}} \frac{1}{\sigma\sqrt{2\pi}} \exp\left[ -\frac{(x-\mu)^{2}}{2\sigma^{2}} \right] dx \qquad (2) \]

The steps to determine the probability are shown symbolically and pictorially in figure 3.8. The function given by equation (3) is usually referred to as the cumulative distribution function, cdf, for the normal distribution.

\[ F(x) = \int_{-\infty}^{x} \frac{1}{\sigma\sqrt{2\pi}} \exp\left[ -\frac{(x'-\mu)^{2}}{2\sigma^{2}} \right] dx' \qquad (3) \]

Figure 3.8. Finding the area under the normal curve.


As an example of the use of the normal distribution, we calculate the probability that x lies within ±σ of the mean, μ. The limits of the integration are x1 = μ − σ and x2 = μ + σ. We have

\[ P(\mu - \sigma \le x \le \mu + \sigma) = \int_{\mu-\sigma}^{\mu+\sigma} \frac{1}{\sigma\sqrt{2\pi}} \exp\left[ -\frac{(x-\mu)^{2}}{2\sigma^{2}} \right] dx \qquad (4) \]

To assist in evaluating the integral in equation (4), it is usual to change the variable from x to z, where z is given by

\[ z = \frac{x - \mu}{\sigma} \qquad (5) \]

Here z is a random variable with a mean of zero and a standard deviation of 1. Equation (1) reduces to

\[ f(z) = \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{z^{2}}{2} \right) \qquad (6) \]
Equation (4) becomes

\[ P(\mu - \sigma \le x \le \mu + \sigma) = \int_{-1}^{+1} \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{z^{2}}{2} \right) dz \qquad (7) \]
The integral appearing in equation (7) cannot be evaluated
analytically and so a numerical method for solving for the
area under the curve is required.
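
A minimal sketch of such a numerical evaluation (scipy assumed available): integrate the standard normal density of equation (6) between z = −1 and z = +1, or equivalently take a difference of cumulative distribution function values.

```python
import math
from scipy import stats
from scipy.integrate import quad

# Standard normal probability density, equation (6)
def f(z):
    return math.exp(-z**2 / 2.0) / math.sqrt(2.0 * math.pi)

area, _ = quad(f, -1.0, 1.0)            # numerical form of equation (7)
print(f"P(mu - sigma <= x <= mu + sigma) = {area:.4f}")   # about 0.6827

# Cross-check using the cumulative distribution function
print(stats.norm.cdf(1.0) - stats.norm.cdf(-1.0))
```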
Q.1.B(2) Define correlation coefficient. Explain in brief.

Ans: Correlation coefficient: Correlation coefficients are used in statistics to measure how strong a relationship is between two variables.

Correlation is a statistical technique that is used to measure and describe the STRENGTH and DIRECTION of the relationship between two variables.

1. The Direction of a Relationship
The correlation measure tells us about the direction of the relationship between the two variables. The direction can be positive or negative.
I. Positive: In a positive relationship both variables tend to move in the same direction: if one variable increases, the other tends to increase as well, and if one decreases, the other tends to decrease as well.
II. Negative: In a negative relationship the variables tend to move in opposite directions: if one variable increases, the other tends to decrease, and vice versa.

The direction of the relationship between two variables is identified by the sign of the correlation coefficient for the variables. Positive relationships have a "plus" sign, whereas negative relationships have a "minus" sign.
2. The Degree (Strength) of a Relationship
A correlation coefficient measures the degree (strength) of
the relationship between two variables. The Pearson
Correlation Coefficient measures the strength of the linear
relationship between two variables.

Two specific strengths are:
I. Perfect Relationship: When two variables are
exactly (linearly) related the correlation coefficient
is either +1.00 or -1.00. They are said to be
perfectly linearly related, either positively or
negatively.
II. No relationship: When two variables have no
relationship at all, their correlation is 0.00.
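
As a small worked sketch (invented data, numpy assumed available), Pearson's correlation coefficient can be computed directly from its definition; the sign gives the direction and the magnitude gives the strength of the linear relationship.

```python
import numpy as np

# Invented paired data with a roughly linear, positive relationship
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Pearson correlation coefficient: covariance of x and y divided by the
# product of their standard deviations
r = np.sum((x - x.mean()) * (y - y.mean())) / np.sqrt(
    np.sum((x - x.mean()) ** 2) * np.sum((y - y.mean()) ** 2))
print(f"r = {r:.3f}")    # close to +1: strong positive linear relationship
# np.corrcoef(x, y)[0, 1] gives the same value.
```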
