
Vince Carlo C. Garcia

Discussion

Every experiment is prone to three types of errors: human errors, systematic errors, and random errors. Human errors are errors introduced by the experimenter. Unlike the other two types, a human error is not inherent in the experiment, which means it can be avoided only by the way the experimenter carries out the procedure. A common example of human error is a mistake in the calculation of quantitative data due to negligence.

Systematic errors are errors inherent in the method used. A common example arises when the method involves weighing: sometimes the balance registers a nonzero reading even though no sample is on the pan. In this setup the systematic error is attributed to the effect of air pressure on the balance; it can be resolved by zeroing (taring) the balance.

The last type of error is random error. This error arises because all measuring devices and all observers have limits, so there is no such thing as a 100% precise measurement. For example, when a very small length (< 1 cm) is measured with a meter stick, different readings may be obtained: first, because the meter stick cannot resolve lengths finer than its smallest graduation, and second, because human eyesight is limited. It is important to note that these are random errors, not systematic or human errors: the limited resolution of the meter stick is not a flaw, and human eyesight is likewise not a flaw but a limitation.

Errors, however, can be reduced and accounted for in the reporting of data by applying statistical analysis. In the experiment, statistical analysis was used to determine the validity of the data, that is, to measure its precision and accuracy. Measuring precision and accuracy requires replicate measurements on different samples, in this case ten 25-centavo coins. A variety of statistical tools were applied. Before any other analysis, a Q-test was administered to determine whether any data point is markedly different from the rest. This step is important because it detects outliers, which can distort the statistics that describe the quality of the data. The Q-test showed that the data set contained no outliers (a sketch of the test is given below).

One common tool for measuring precision is the standard deviation, which measures how a set of data is spread out about the mean. It is useful for judging whether a data set is precise because it measures the closeness of the data points to the mean, or average, of the data set. A low standard deviation relative to the mean therefore indicates a precise data set, because most of the data points lie close to the mean, as predicted by the Gaussian or normal distribution curve. A Gaussian distribution is interpreted as a bell curve centered at the mean, with about 68% of the total area lying within one standard deviation on either side of the mean, about 95% within two standard deviations, and nearly all (about 99.7%) within three standard deviations. Knowing the mean and the standard deviation is therefore enough to describe a Gaussian distribution. Statistically speaking, these areas give the expected proportion of data points falling within each interval.
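As a concrete illustration of the Q-test described above, here is a minimal Python sketch applying Dixon's Q-test to a set of replicate coin masses. The masses are hypothetical placeholders, since the actual experimental values are not reproduced in this report; 0.466 is the commonly tabulated critical Q for ten observations at the 95% confidence level.

```python
from statistics import mean, stdev

# Hypothetical masses (g) of ten 25-centavo coins; the actual
# experimental values are not reproduced in this report.
masses = [3.61, 3.58, 3.62, 3.59, 3.60, 3.57, 3.63, 3.60, 3.58, 3.61]

data = sorted(masses)
spread = data[-1] - data[0]  # range of the data set
q_crit = 0.466               # tabulated Q for n = 10 at 95% confidence

# Dixon's Q = (gap to nearest neighbor) / range, tested at both extremes.
q_low = (data[1] - data[0]) / spread
q_high = (data[-1] - data[-2]) / spread

for label, q in (("lowest", q_low), ("highest", q_high)):
    verdict = "reject as outlier" if q > q_crit else "retain"
    print(f"{label} value: Q = {q:.3f} -> {verdict}")

print(f"mean = {mean(data):.3f} g, standard deviation = {stdev(data):.4f} g")
```

For these placeholder values both extremes give Q well below 0.466, so no point is rejected, mirroring the outcome reported for the experimental data.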
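The 68-95-99.7 areas quoted above can also be checked directly: the fraction of a Gaussian's area lying within k standard deviations of the mean is erf(k/√2). A short check using only the Python standard library:

```python
from math import erf, sqrt

# Fraction of the area under a Gaussian curve lying within
# k standard deviations of the mean: erf(k / sqrt(2)).
for k in (1, 2, 3):
    print(f"within +/- {k} sigma: {erf(k / sqrt(2)):.2%}")
# within +/- 1 sigma: 68.27%
# within +/- 2 sigma: 95.45%
# within +/- 3 sigma: 99.73%
```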
Confidence limits were used in measuring the accuracy of the data set. In computing confidence limits, it is important to note that while the mean of a data set is usually interpreted as the best approximation of the true value, for more accurate reporting it is safer to give a best estimate in the form of a range of values within which the true value lies. This is where confidence limits become important: instead of a single value, which is unlikely to equal the true value exactly, confidence limits give a range of values within which the true value lies with a stated probability. The width of this range is proportional to the tabulated t value and to the standard deviation, and inversely proportional to the square root of the number of data points (mean ± ts/√N).

In the experiment a 95% confidence coefficient was used, which, together with the degrees of freedom (N − 1 = 9 for ten coins), gives the tabulated t value. A large standard deviation yields a wide range within which the true value may lie, while a large number of data points yields a narrow range.
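Putting this together, a minimal sketch of the 95% confidence limits, using the same hypothetical coin masses as in the Q-test sketch above and the standard tabulated t value of 2.262 for 9 degrees of freedom:

```python
from math import sqrt
from statistics import mean, stdev

# Same hypothetical coin masses as in the Q-test sketch above.
masses = [3.61, 3.58, 3.62, 3.59, 3.60, 3.57, 3.63, 3.60, 3.58, 3.61]

n = len(masses)
x_bar = mean(masses)
s = stdev(masses)
t = 2.262  # tabulated t for N - 1 = 9 degrees of freedom, 95% confidence

half_width = t * s / sqrt(n)  # confidence interval: mean +/- t*s/sqrt(N)
print(f"mean = {x_bar:.3f} g")
print(f"95% confidence limits: {x_bar - half_width:.3f} g to {x_bar + half_width:.3f} g")
```

Doubling the standard deviation doubles the half-width of the interval, while quadrupling the number of measurements halves it, which is the inverse square-root dependence noted above.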
