
What Is a Confidence Interval?

A confidence interval is a range of values, calculated from our sample, within which we expect the true population value to lie.

Confidence, in statistics, is another way to describe probability. Analysts often use
confidence intervals that contain either 95% or 99% of expected observations. For
example, if you construct a confidence interval with a 95% confidence level, you are confident
that 95 out of 100 times the estimate will fall between the upper and lower values specified by
the confidence interval.

Your desired confidence level is usually one minus the alpha (α) value you used in
your statistical test:

Confidence level = 1 − α

So if you use an alpha value of 0.05 (i.e., p < 0.05 for statistical significance), then your
confidence level would be 1 − 0.05 = 0.95, or 95%.
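As a minimal sketch of this relationship (assuming a made-up sample stored in a NumPy array and a confidence interval for the mean based on SciPy's t distribution):

import numpy as np
from scipy import stats

# Hypothetical sample data; any array of measurements would do.
sample = np.array([4.8, 5.1, 5.0, 4.7, 5.3, 4.9, 5.2, 5.0])

alpha = 0.05                   # significance level used in the test
confidence_level = 1 - alpha   # 0.95, i.e. a 95% confidence level

mean = sample.mean()
sem = stats.sem(sample)        # standard error of the mean

# 95% confidence interval for the population mean (t distribution)
lower, upper = stats.t.interval(confidence_level, df=len(sample) - 1,
                                loc=mean, scale=sem)
print(f"{confidence_level:.0%} CI: ({lower:.3f}, {upper:.3f})")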

Hypothesis testing

When interpreting research findings, researchers need to assess whether these findings
may have occurred by chance. Hypothesis testing is a systematic procedure for
deciding whether the results of a research study support a particular theory which
applies to a population.
Hypothesis testing uses sample data to evaluate a hypothesis about a population. A
hypothesis test assesses how unusual the result is: whether it is reasonable chance
variation or whether it is too extreme to be considered chance variation.
The purpose of hypothesis testing is to decide whether the null hypothesis (there is no
difference, no effect) can be rejected or retained. If the null hypothesis is rejected, then
the research hypothesis can be accepted. If the null hypothesis is retained, then the
research hypothesis is rejected.
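As an illustrative sketch (using a one-sample t-test from SciPy on made-up data, with α = 0.05; the null value of 5.0 is an assumption), the reject-or-retain decision can be expressed as:

import numpy as np
from scipy import stats

# Hypothetical sample; null hypothesis: the population mean is 5.0.
sample = np.array([4.8, 5.1, 5.0, 4.7, 5.3, 4.9, 5.2, 5.0])
alpha = 0.05

t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: retain the null hypothesis")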

Normal distribution

A normal distribution is a bell-shaped frequency distribution curve. Most of the data
values in a normal distribution tend to cluster around the mean. The further a data point
is from the mean, the less likely it is to occur. Mean, mode and median are equal. There
are many things, such as intelligence, height, and blood pressure, that naturally follow a
normal distribution. For example, if you took the height of one hundred 22-year-old
women and created a histogram by plotting height on the x-axis, and the frequency at
which each of the heights occurred on the y-axis, you would get a normal distribution.
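A small sketch of that height example using simulated data (the mean of 165 cm and standard deviation of 7 cm are assumptions chosen for illustration):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Simulated heights (cm) for one hundred 22-year-old women.
heights = rng.normal(loc=165, scale=7, size=100)

plt.hist(heights, bins=12, edgecolor="black")
plt.xlabel("Height (cm)")
plt.ylabel("Frequency")
plt.title("Heights of one hundred 22-year-old women (simulated)")
plt.show()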

All normal distributions share the following characteristics:

1. It is symmetric

A normal distribution comes with a perfectly symmetrical shape. This means that the
distribution curve can be divided in the middle to produce two equal halves. The
symmetric shape occurs when one-half of the observations fall on each side of the
curve.
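A quick check of that symmetry on simulated normal data (the sample is generated purely for illustration): roughly half of the observations should fall below the mean.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0, scale=1, size=100_000)

# In a symmetric distribution, about 50% of observations lie below the mean.
share_below_mean = np.mean(data < data.mean())
print(f"Share of observations below the mean: {share_below_mean:.3f}")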

2. The mean, median, and mode are equal

The middle point of a normal distribution is the point with the maximum frequency,
which means that it contains the most observations of the variable. The midpoint is
also where the mean, median, and mode fall; the three measures are equal in a
perfectly normal distribution.
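A small illustration with simulated normal data (the values are rounded before taking the mode, since continuous data rarely repeat exactly; that rounding step is an assumption of the sketch):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=100, scale=15, size=100_000)

mean = data.mean()
median = np.median(data)
# Round to whole numbers so a mode exists for continuous data.
mode = stats.mode(np.round(data), keepdims=False).mode

print(f"mean={mean:.1f}, median={median:.1f}, mode={mode:.1f}")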

3. Empirical rule

In normally distributed data, a constant proportion of the area under the curve lies
between the mean and a given number of standard deviations from the mean. For
example, about 68% of all cases fall within +/- one standard deviation of the mean,
about 95% of all cases fall within +/- two standard deviations, and about 99.7% of all
cases fall within +/- three standard deviations.
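A numerical check of the empirical rule on simulated standard-normal data (the sample is generated for illustration):

import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=0, scale=1, size=1_000_000)
mean, sd = data.mean(), data.std()

for k in (1, 2, 3):
    within = np.mean(np.abs(data - mean) <= k * sd)
    print(f"within +/- {k} SD: {within:.4f}")
# Expected: roughly 0.6827, 0.9545, 0.9973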

4. Skewness and kurtosis

Skewness and kurtosis are coefficients that measure how much a distribution departs
from a normal distribution. Skewness measures the asymmetry of a distribution, while
kurtosis measures the thickness of its tails relative to the tails of a normal
distribution.
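As a sketch using SciPy's skew and kurtosis functions on simulated data (the right-skewed exponential sample is an assumption chosen to show clearly non-zero coefficients):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
normal_data = rng.normal(size=100_000)
skewed_data = rng.exponential(size=100_000)   # right-skewed comparison sample

for name, data in [("normal", normal_data), ("right-skewed", skewed_data)]:
    # Both coefficients are approximately 0 for a normal distribution
    # (SciPy reports excess kurtosis by default).
    print(f"{name}: skewness={stats.skew(data):.2f}, "
          f"kurtosis={stats.kurtosis(data):.2f}")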
