
Sources of Error in Science Labs

All science experiments contain error, so it’s important to know the types of error
and how to calculate it.

(Image: NASA/GSFC/Chris Gunn)

First, let’s be clear about the difference between QUALITATIVE and QUANTITATIVE research and data.
Very simple, short video: https://www.youtube.com/watch?v=dwFsRZv4oHA
More complete explanation: https://www.youtube.com/watch?v=69qP8kFJp_k
In our work, we are essentially focusing on QUANTITATIVE investigation.

Science labs usually ask you to compare your results against theoretical or known values. This helps you
evaluate your results and compare them against other people’s values. The difference between your results
and the expected or theoretical results is called error. The amount of error that is acceptable depends on the
experiment, but a margin of error of 10% is generally considered acceptable. If there is a large margin of
error, you’ll be asked to go over your procedure and identify any mistakes you may have made or places
where error might have been introduced. So, you need to know the different types and sources of error and
how to calculate them.

How to Calculate Absolute Error


One method of measuring error is by calculating absolute error, which is also called absolute uncertainty.
This measure of accuracy is reported using the units of measurement. Absolute error is simply the difference
between the measured value and either the true value or the average value of the data.
absolute error = measured value – true value
For example, if you measure gravity to be 9.6 m/s² and the true value is 9.8 m/s², then the absolute error of
the measurement is 0.2 m/s². You could report the error with a sign, so the absolute error in this example
could be -0.2 m/s².
If you measure the length of a sample three times and get 1.1 cm, 1.5 cm, and 1.3 cm, then the absolute error
is +/- 0.2 cm or you would say the length of the sample is 1.3 cm (the average) +/- 0.2 cm.
Some people consider absolute error to be a measure of how accurate your measuring instrument is. If you
are using a ruler that reports length to the nearest millimeter, you might say the absolute error of any
measurement taken with that ruler is to the nearest 1 mm or (if you feel confident you can see between one
mark and the next) to the nearest 0.5 mm.
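If it helps, here is a minimal Python sketch of the two calculations above; the function name and the way the repeated measurements are summarised are just one reasonable reading of the definitions, not a standard routine:

    def absolute_error(measured, true):
        # Signed difference; take abs() if you only want the magnitude.
        return measured - true

    # Single measurement against a known value (the gravity example above):
    print(absolute_error(9.6, 9.8))            # -0.2 (m/s²)

    # Repeated measurements: report the mean and the largest deviation from it.
    lengths = [1.1, 1.5, 1.3]                  # cm
    mean = sum(lengths) / len(lengths)
    max_dev = max(abs(x - mean) for x in lengths)
    print(f"{mean:.1f} +/- {max_dev:.1f} cm")  # 1.3 +/- 0.2 cm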

How to Calculate Relative Error


Relative error is based on the absolute error value. It compares how large the error is to the magnitude of
the measurement. So, an error of 0.1 kg might be insignificant when weighing a person, but pretty terrible
when weighing an apple. Relative error is a fraction, decimal value, or percent.
Relative Error = Absolute Error / True Value
For example, if your speedometer says you are going 55 mph when you’re really going 58 mph, the
absolute error is 3 mph and the relative error is 3 mph / 58 mph, or about 0.05, which you could multiply by
100% to give 5%. Relative error may be reported with a sign. In this case, the speedometer is off by -5%
because the recorded value is lower than the true value.
Because the absolute error definition is ambiguous, most lab reports ask for percent error or percent
difference.
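As a quick illustration, here is the speedometer example in Python (the function name is just for this sketch; the values are from the text):

    def relative_error(measured, true):
        # Error as a fraction of the true value; multiply by 100 for a percentage.
        return (measured - true) / true

    rel = relative_error(55, 58)               # reads 55 mph, true speed 58 mph
    print(f"{rel:.3f} ({rel * 100:.1f}%)")     # -0.052 (-5.2%), roughly -5%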

How to Calculate Percent Error


The most common error calculation is percent error, which is used when comparing your results against a
known, theoretical, or accepted value. As you can probably guess from the name, percent error is expressed
as a percentage. It is the absolute (no negative sign) difference between your value and the accepted value,
divided by the accepted value, multiplied by 100% to give the percent:
% error = |accepted value – experimental value| / accepted value x 100%
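A short Python version of this formula, reusing the gravity values from earlier (the function name is illustrative):

    def percent_error(experimental, accepted):
        # Unsigned difference relative to the accepted value, as a percentage.
        return abs(accepted - experimental) / accepted * 100

    print(percent_error(9.6, 9.8))             # about 2.0 (%)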

How to Calculate Percent Difference


Another common error calculation is called percent difference. It is used when you are comparing one
experimental result to another. In this case, no result is necessarily better than another, so the percent
difference is the absolute value (no negative sign) of the difference between the values, divided by the
average of the two numbers, multiplied by 100% to give a percentage:
% difference = |experimental value – other value| / average x 100%
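And the matching Python sketch; the two sample results here are made-up numbers purely to show the calculation:

    def percent_difference(value1, value2):
        # Unsigned difference relative to the average of the two values.
        average = (value1 + value2) / 2
        return abs(value1 - value2) / average * 100

    print(percent_difference(9.6, 9.7))        # about 1.0 (%)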

Sources and Types of Error


Every experimental measurement, no matter how carefully you take it, contains some amount of uncertainty
or error. You are measuring against a standard, using an instrument that can never perfectly duplicate the
standard, plus you’re human, so you might introduce errors based on your technique. The three main
categories of errors are systematic errors, random errors, and personal errors. Here’s what these types of
errors are and common examples.

1. Systematic Errors
Systematic error affects all the measurements you take. All of these errors will be in the same direction
(greater than or less than the true value) and you can’t compensate for them by taking additional data.
Examples of Systematic Errors

• If you forget to calibrate a balance or you’re off a bit in the calibration, all mass measurements will be
high/low by the same amount. Some instruments require periodic calibration throughout the course of an
experiment, so it’s good to make a note in your lab notebook to see whether the calibration appears to
have affected the data.
• Another example is measuring volume by reading a meniscus (parallax error). You likely read the meniscus
the same way each time, but the reading is never perfectly correct. Another person may view the meniscus
from a different angle and so record a different value.
Parallax can occur in other types of optical measurements, such as those taken with a microscope or
telescope.
• Instrument drift is a common source of error when using electronic instruments. As the instruments
warm up, the measurements may change. Other common systematic errors include hysteresis or lag
time, either relating to the instrument’s response to a change in conditions or to fluctuations in an
instrument that hasn’t reached equilibrium. Note that some of these systematic errors are progressive, so
the data become better (or worse) over time, making it hard to compare data points taken at the beginning
of an experiment with those taken at the end. This is why it’s a good idea to record data sequentially, so you
can spot gradual trends if they occur. It’s also why it’s good to start data collection with a different
specimen each time (if applicable), rather than always following the same sequence.
• Not accounting for a variable that turns out to be important is usually a systematic error, although it
could be a random error or a confounding variable. If you find an influencing factor, it’s worth noting it in
your report, and it may lead to further experimentation after isolating and controlling this variable.

2. Random Errors
Random errors are due to fluctuations in the experimental or measurement conditions. Usually these errors
are small. Taking more data tends to reduce the effect of random errors.
Examples of Random Errors

• If your experiment requires stable conditions, but a large group of people stomp through the room during
one data set, random error will be introduced. Drafts, temperature changes, light/dark differences, and
electrical or magnetic noise are all examples of environmental factors that can introduce random errors.
• Physical errors may also occur, since a sample is never completely homogeneous. For this reason, it’s
best to test different locations of the sample, or take multiple measurements, to reduce the amount of
error.
• Instrument resolution is also considered a type of random error because the measurement is equally
likely to be higher or lower than the true value. An example of a resolution error is taking volume
measurements with a beaker rather than a graduated cylinder. The beaker will have a greater amount
of error than the cylinder.
• Incomplete definition can be a systematic or random error, depending on the circumstances. Incomplete
definition means it can be hard for two people to agree on the point at which the measurement is
complete. For example, if you’re measuring length with an elastic string, you’ll need to
decide with your peers when the string is tight enough without stretching it. During a titration, if you’re
looking for a color change, it can be hard to tell when it actually occurs.

3. Personal Errors
When writing a lab report, you shouldn’t cite “human error” as a source of error. Rather, you should attempt
to identify a specific mistake or problem. One common personal error is going into an experiment with a
bias about whether a hypothesis will be supported or rejected. Another common personal error is lack of
experience with a piece of equipment, where your measurements may become more accurate and reliable
after you know what you’re doing. Another type of personal error is a simple mistake, where you might have
used an incorrect quantity of a chemical, timed an experiment inconsistently, or skipped a step in a protocol.

Visualising Errors – Error Bars


Let’s keep it simple! If there is a calculated quantitative margin of error, then this can be plotted simply
enough on a graph of results, using error bars.
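For example, a minimal matplotlib sketch might look like the following; the trial numbers, means, and error values are invented placeholders standing in for your own results:

    import matplotlib.pyplot as plt

    trials = [1, 2, 3, 4]
    means = [9.6, 9.7, 9.5, 9.8]    # e.g. measured values (illustrative numbers)
    errors = [0.2, 0.1, 0.3, 0.2]   # calculated margin of error for each point

    # Plot each mean as a point with a vertical error bar of +/- its error value.
    plt.errorbar(trials, means, yerr=errors, fmt='o', capsize=4)
    plt.xlabel("Trial")
    plt.ylabel("Measured value")
    plt.title("Results with error bars")
    plt.show()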
Reproduced and modified from Science Notes: https://sciencenotes.org/error-in-science/
