
Situations where a Type II error is more dangerous than a Type I error

By thinking in terms of false positives and false negatives, we are better equipped to consider which of these errors is less harmful. A Type II error often carries the worse consequences, and for good reason.

Suppose you are designing a medical screening test for a disease. A false positive (a Type I error) may cause a patient some anxiety, but it will lead to further testing that ultimately reveals the initial result was incorrect. In contrast, a false negative (a Type II error) would give a patient the incorrect assurance that he or she does not have the disease when he or she in fact does.

As a result of this incorrect information, the disease would go untreated. If doctors had to choose between these two options, a false positive is more desirable than a false negative.

Now suppose that someone had been put on trial for murder. The null hypothesis here is that the

person is not guilty. A Type I error would occur if the person were found guilty of a murder that

he or she did not commit, which would be a very serious outcome for the defendant. On the other

hand, a Type II error would occur if the jury finds the person not guilty even though he or she

committed the murder, which is a great outcome for the defendant but not for society as a whole.

Here we see the value in a judicial system that seeks to minimize Type I errors.

Reduction of Type I error

A Type I error is the rejection of the null hypothesis when it is actually true, at a given level of significance. Alpha, the significance level, is the probability of committing a Type I error. At a 5% significance level, sample results falling in the outer 5% of the distribution curve are rejected. The larger this rejection region, the greater the chance that a sample drawn from the true population falls into it, and thus the higher the probability of incorrectly rejecting a true null hypothesis. If the significance level is reduced from 5% to 1%, the rejection region shrinks, which lowers the chance that a sample from the true population lands in it. Thus the probability of committing a Type I error decreases as the significance level alpha is reduced.
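The effect of shrinking the rejection region can be checked with a short simulation. The sketch below is illustrative, not part of the original text: it assumes a two-sided one-sample z-test with known standard deviation 1, and repeatedly draws data under a true null hypothesis (mean 0) to count false rejections.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def type_i_error_rate(alpha, n_trials=20000, n=30):
    """Simulate samples where H0 (mean 0) is true and count false rejections."""
    samples = rng.normal(0.0, 1.0, size=(n_trials, n))
    z = samples.mean(axis=1) * np.sqrt(n)   # z-statistic, known sigma = 1
    crit = norm.ppf(1 - alpha / 2)          # two-sided critical value
    return np.mean(np.abs(z) > crit)

print(type_i_error_rate(0.05))  # close to 0.05
print(type_i_error_rate(0.01))  # close to 0.01
```

The observed false-rejection rate tracks alpha: cutting alpha from 5% to 1% cuts the Type I error rate accordingly.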

Reduction of Type II error

The probability of making a Type II error is β, which depends on the power of the test (power = 1 − β). While it is impossible to completely avoid Type II errors, you can reduce the risk of committing one by ensuring your test has enough power. You can do this by making your sample size large enough to detect a practical difference when one truly exists, which may mean running an experiment for longer and gathering more data. This helps avoid the false conclusion that an experiment has no impact when it actually does.
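The link between sample size and power can also be seen by simulation. This is a sketch under assumed numbers (a true effect of 0.3 standard deviations, alpha = 0.05, a two-sided z-test); none of these values come from the text.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def power(n, effect=0.3, alpha=0.05, n_trials=20000):
    """Fraction of trials that correctly reject H0 when the true mean is `effect`."""
    samples = rng.normal(effect, 1.0, size=(n_trials, n))
    z = samples.mean(axis=1) * np.sqrt(n)   # z-statistic, known sigma = 1
    crit = norm.ppf(1 - alpha / 2)
    return np.mean(np.abs(z) > crit)

print(power(20))   # low power: Type II errors are common
print(power(200))  # high power: beta, the Type II error rate, shrinks
```

With 20 observations the test misses the effect most of the time; with 200 it detects it almost always, which is exactly the trade-off the paragraph above describes.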

Difference between practical and statistical significance

i. Statistical significance indicates that a relationship between two variables is unlikely to be due to chance, whereas practical significance implies that the relationship matters in a real-world scenario.

ii. Statistical significance is mathematical and sample-size centric. Practical significance arises from the applicability of the result to decision making; it is more subjective and depends on external factors such as cost, time, and objectives, in addition to statistical significance.
Is the difference given in the case significant?

The survey shows a 3% difference between senior and sophomore participants in favour of moving the help centre offshore. The question is how much significance this 3% difference has, statistically as well as practically. The statistical significance of the 3% difference depends on the size of the sample used in determining the percentages of senior and sophomore participants. With a sufficiently large sample the difference is statistically significant, and with a very small sample it is statistically insignificant. Thus, the larger the sample size, the greater the statistical significance of a computed figure.

On the other hand, the 3% difference has practical significance if a decision is made, or an action is or needs to be taken, on its basis. If cost permits, the authority may consider placing the help centre offshore in light of this result. In that case the 3% difference, though small, may be practically significant.
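The sample-size point can be made concrete with a two-proportion z-test. The survey's actual counts are not given, so the proportions below (55% of seniors vs 52% of sophomores) are hypothetical, chosen only to produce the 3% gap; the sketch shows how the same gap flips from insignificant to significant as the samples grow.

```python
import numpy as np
from scipy.stats import norm

def two_prop_z(p1, p2, n1, n2):
    """Two-sided z-test p-value for the difference between two proportions."""
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)                          # pooled proportion
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) # standard error
    z = (p1 - p2) / se
    return 2 * norm.sf(abs(z))

# The same 3% gap, with small and large samples of each group:
print(two_prop_z(0.55, 0.52, 100, 100))    # large p-value: not significant
print(two_prop_z(0.55, 0.52, 5000, 5000))  # small p-value: significant
```

With 100 respondents per group the p-value is far above 0.05, while with 5000 per group it falls well below it, even though the observed difference is identical.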
