
ANOVA: Analysis of Variance
Session 15
One-Way Analysis of Variance

• Evaluates the difference between the means of three or more groups
  Examples: Number of incidents for the 1st, 2nd, and 3rd shifts
            Expected mileage for five brands of tires

• Assumptions
  • Populations are normally distributed
  • Populations have equal variances
  • Samples are randomly and independently drawn
Hypotheses of One-Way ANOVA

• H0: μ1 = μ2 = … = μc
  • All population means are equal
  • i.e., no factor effect (no variation in means between groups)

• H1: Not all of the population means are equal
  • At least one population mean is different
  • i.e., there is a factor effect
  • Does not mean that all population means are different
    (some pairs may be the same)
One-Way ANOVA

The Null Hypothesis is true:
all means are the same (no factor effect)
μ1 = μ2 = μ3

One-Way ANOVA

The Null Hypothesis is NOT true:
at least one of the means is different (factor effect is present)
μ1 = μ2 ≠ μ3   or   μ1 ≠ μ2 ≠ μ3
Visualizing ANOVA through Example
Partitioning the Variation
• Total variation can be split into two parts:

SST = SSB + SSW

SST = Total Sum of Squares (total variation)
SSB = Sum of Squares Between Groups (among-group variation)
SSW = Sum of Squares Within Groups (within-group variation)
Partitioning the Variation
(continued)

SST = SSB + SSW

Total Variation (SST) = the aggregate variation of the individual data values
across the various factor levels

Between-Group Variation (SSB) = variation between the factor sample means

Within-Group Variation (SSW) = variation that exists among the data values
within a particular factor level
Partition of Total Variation

Total Variation (SST) = Variation Due to Factor (SSB) + Variation Due to Random Error (SSW)
Total Sum of Squares

SST = SSB + SSW

SST = Σ (over j = 1..c) Σ (over i = 1..nj) (Xij – X̄)²

Where:
SST = Total sum of squares
c = number of groups or levels
nj = number of observations in group j
Xij = ith observation from group j
X̄ = grand mean (mean of all data values)
Total Variation
(continued)
BETWEEN-Group Variation

SST = SSB + SSW

SSB = Σ (over j = 1..c) nj (X̄j – X̄)²

Where:
SSB = Sum of squares between groups
c = number of groups
nj = sample size from group j
X̄j = sample mean from group j
X̄ = grand mean (mean of all data values)
BETWEEN-Group Variation
(continued)

Variation due to differences between groups:

Mean Square Between:  MSB = SSB / (c – 1)

BETWEEN-Group Variation
(continued)
Within-Group Variation

SST = SSB + SSW

SSW = Σ (over j = 1..c) Σ (over i = 1..nj) (Xij – X̄j)²

Where:
SSW = Sum of squares within groups
c = number of groups
nj = sample size from group j
X̄j = sample mean from group j
Xij = ith observation in group j
Within-Group Variation
(continued)

Summing the variation within each group and then adding over all groups:

Mean Square Within:  MSW = SSW / (n – c)

Within-Group Variation
(continued)
Obtaining the Mean Squares
The mean squares are obtained by dividing the various
sums of squares by their associated degrees of freedom:

Mean Square Between:  MSB = SSB / (c – 1)    (d.f. = c – 1)

Mean Square Within:   MSW = SSW / (n – c)    (d.f. = n – c)

Mean Square Total:    MST = SST / (n – 1)    (d.f. = n – 1)
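As an illustration of how these pieces fit together, here is a minimal Python sketch (the group values are made up for illustration, not taken from the slides) that computes SST, SSB, SSW and the mean squares, and confirms the identity SST = SSB + SSW:

```python
import numpy as np

# Hypothetical data: c = 3 groups (unequal group sizes are fine)
groups = [np.array([23.0, 25.0, 21.0]),
          np.array([30.0, 28.0, 31.0, 29.0]),
          np.array([26.0, 24.0, 27.0])]

c = len(groups)                               # number of groups
n = sum(len(g) for g in groups)               # total number of observations
grand_mean = np.concatenate(groups).mean()    # grand mean of all data values

SSB = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)   # between-group
SSW = sum(((g - g.mean()) ** 2).sum() for g in groups)             # within-group
SST = ((np.concatenate(groups) - grand_mean) ** 2).sum()           # total

MSB = SSB / (c - 1)    # mean square between (d.f. = c - 1)
MSW = SSW / (n - c)    # mean square within  (d.f. = n - c)

print(round(SST, 6) == round(SSB + SSW, 6))   # True: SST = SSB + SSW
print("MSB =", round(MSB, 2), " MSW =", round(MSW, 2))
```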
One-Way ANOVA Table

Source of        Degrees of   Sum of    Mean Square
Variation        Freedom      Squares   (Variance)            F
---------------  -----------  --------  --------------------  ------------------
Between Groups   c – 1        SSB       MSB = SSB / (c – 1)   FSTAT = MSB / MSW
Within Groups    n – c        SSW       MSW = SSW / (n – c)
Total            n – 1        SST

c = number of groups
n = sum of the sample sizes from all groups
df = degrees of freedom
One-Way ANOVA
F Test Statistic
H0: μ1 = μ2 = … = μc
H1: At least two population means are different

• Test statistic: FSTAT = MSB / MSW

  MSB is the mean square between groups
  MSW is the mean square within groups

• Degrees of freedom
  • df1 = c – 1 (c = number of groups)
  • df2 = n – c (n = sum of sample sizes from all populations)
Interpreting One-Way ANOVA & F-Statistic
• The F statistic is the ratio of the between-groups estimate of variance to the
  within-groups estimate of variance. The higher the ratio, the larger the
  between-group variance (numerator) relative to the within-group variance
  (denominator), i.e. the groups are homogeneous within but differ between.
• The ratio must always be positive
• df1 = c – 1 will typically be small
• df2 = n – c will typically be large

Decision Rule:
Reject H0 if FSTAT > Fα; otherwise do not reject H0.
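A minimal sketch of the F test and decision rule in Python, assuming MSB, MSW and the degrees of freedom have already been computed as on the earlier slides; scipy's F distribution supplies the critical value Fα and the p-value (the function name one_way_f_test is just an illustrative choice):

```python
from scipy import stats

def one_way_f_test(msb, msw, df1, df2, alpha=0.05):
    """Return FSTAT, the critical value F_alpha, the p-value and the decision."""
    f_stat = msb / msw                           # ratio of between- to within-group variance
    f_crit = stats.f.ppf(1 - alpha, df1, df2)    # upper-tail critical value F_alpha
    p_value = stats.f.sf(f_stat, df1, df2)       # P(F > FSTAT)
    return f_stat, f_crit, p_value, f_stat > f_crit   # reject H0 if FSTAT > F_alpha

# For the zone example that follows: FSTAT ≈ 25.28, F_alpha ≈ 3.89, p ≈ 0.00005 -> reject H0
print(one_way_f_test(2358.2, 93.3, 2, 12))
```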


One-Way ANOVA
F Test Example

You want to compare sales in three zones. You record weekly sales for 5 randomly
selected weeks in each zone. At the 0.05 significance level, is there a difference
in their sales figures?

Zone 1   Zone 2   Zone 3
  254      234      200
  263      218      222
  241      235      197
  237      227      206
  251      216      204
One-Way ANOVA Example:
Scatter Plot

[Scatter plot of the weekly sales values (vertical axis) for Zones 1, 2, and 3
(horizontal axis), with each zone's sample mean marked; data as in the table on
the previous slide.]
One-Way ANOVA Example
Computations

Zone 1   Zone 2   Zone 3        X̄1 = 249.2   n1 = 5
  254      234      200         X̄2 = 226.0   n2 = 5
  263      218      222         X̄3 = 205.8   n3 = 5
  241      235      197         X̄  = 227.0   n  = 15
  237      227      206         c  = 3
  251      216      204

SSB = 5 (249.2 – 227)² + 5 (226 – 227)² + 5 (205.8 – 227)² = 4716.4
SSW = (254 – 249.2)² + (263 – 249.2)² + … + (204 – 205.8)² = 1119.6

MSB = 4716.4 / (3 – 1) = 2358.2
MSW = 1119.6 / (15 – 3) = 93.3
One-Way ANOVA Example Solution

H0: μ1 = μ2 = μ3
H1: μj not all equal
α = 0.05
df1 = 2   df2 = 12

Critical value: Fα = 3.89

Test statistic: FSTAT = MSB / MSW = 2358.2 / 93.3 = 25.275

Decision: Since FSTAT = 25.275 > Fα = 3.89, reject H0 at α = 0.05.

Conclusion: There is evidence that at least one μj differs from the rest.
One-Way ANOVA
Excel Output
(refer to the tutorial sheet)

SUMMARY
Groups   Count   Sum    Average   Variance
Zone 1   5       1246   249.2     108.2
Zone 2   5       1130   226.0      77.5
Zone 3   5       1029   205.8      94.2

ANOVA
Source of Variation   SS       df   MS       F        P-value   F crit
Between Groups        4716.4    2   2358.2   25.275   0.0000    3.89
Within Groups         1119.6   12     93.3
Total                 5836.0   14
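As a check, the Excel output above can be reproduced with scipy (a sketch; scipy reports only the F statistic and p-value, and the remaining table entries follow from the formulas on the earlier slides):

```python
from scipy import stats

zone1 = [254, 263, 241, 237, 251]
zone2 = [234, 218, 235, 227, 216]
zone3 = [200, 222, 197, 206, 204]

f_stat, p_value = stats.f_oneway(zone1, zone2, zone3)   # one-way ANOVA
f_crit = stats.f.ppf(0.95, 2, 12)                       # F crit at alpha = 0.05

print(round(f_stat, 3), round(p_value, 5), round(f_crit, 2))   # ≈ 25.275, 0.00005, 3.89
```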
ANOVA and t-tests of 2 means

Why do we need the analysis of variance? Why not test every pair of means? For
example, say k = 6. There are 6C2 = 6(5)/2 = 15 different pairs of means:
1&2  1&3  1&4  1&5  1&6
2&3  2&4  2&5  2&6
3&4  3&5  3&6
4&5  4&6
5&6
• If we test each pair with α = .05 we increase the probability of making a Type I error.
  If there are no real differences, the probability of making at least one Type I error is
  1 – (.95)^15 = 1 – .463 = .537 (see the quick check below).
• Major shortcoming of ANOVA: it tells us that one or more pairs of means differ, but
  gives no indication of which pair(s) differ.
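A quick check of that familywise error calculation (a sketch; the only inputs are the per-test α = .05 and k = 6 groups):

```python
from math import comb

alpha, k = 0.05, 6
pairs = comb(k, 2)                         # 6C2 = 15 pairwise tests
familywise = 1 - (1 - alpha) ** pairs      # P(at least one Type I error) = 1 - 0.95**15
print(pairs, round(familywise, 3))         # 15 0.537
```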
Multiple Comparisons
When we conclude from the one-way analysis of variance that at least
two treatment means differ (i.e. we reject the null hypothesis
H0: μ1 = μ2 = … = μk), we often need to know which treatment
means are responsible for these differences.

There are three popular statistical inference procedures that allow us to
determine which population means differ:
• Fisher’s least significant difference (LSD) method
• Bonferroni adjustment, and
• Tukey’s multiple comparison method.
Multiple Comparisons
Two means are considered different if the difference between the
corresponding sample means is larger than a critical number. The
general case for this is:

IF |X̄i – X̄j| > NCritical

THEN we conclude that μi and μj differ.

The larger sample mean is then believed to be associated with the larger
population mean.
Fisher’s Least Significant Difference
What is this critical number, NCritical? Recall that in Session 13 we had the
confidence interval estimator of µ1 – µ2:

(X̄1 – X̄2) ± tα/2 √[ sp² (1/n1 + 1/n2) ]

If the interval excludes 0 we can conclude that the population means differ. So
another way to conduct a two-tail test is to determine whether

|X̄1 – X̄2|   is greater than   tα/2 √[ sp² (1/n1 + 1/n2) ]
Fisher’s Least Significant Difference
However, we have a better estimator of the pooled variance: MSE. We
substitute MSE in place of sp². Thus we compare the difference between
means to the Least Significant Difference (LSD), given by:

LSD = tα/2 √[ MSE (1/ni + 1/nj) ]

LSD will be the same for all pairs of means if all k sample sizes are equal. If
some sample sizes differ, LSD must be calculated for each combination.

MSE = MSW = SSW / (n – k), and tα/2 has n – k degrees of freedom.
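A minimal Python sketch of the LSD calculation, using the numbers from the example that follows (MSE = 12,399, ni = nj = 10, n – k = 36 degrees of freedom) so the result can be checked against LSD ≈ 101.09; the function name fisher_lsd is just an illustrative choice:

```python
from math import sqrt
from scipy import stats

def fisher_lsd(mse, n_i, n_j, df_within, alpha=0.05):
    """Least significant difference for comparing the means of groups i and j."""
    t_crit = stats.t.ppf(1 - alpha / 2, df_within)   # t_{alpha/2} with n - k d.f.
    return t_crit * sqrt(mse * (1 / n_i + 1 / n_j))

# ≈ 101 (the slide uses the rounded t = 2.030 and reports 101.09)
print(round(fisher_lsd(12_399, 10, 10, 36), 2))
```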
Example
The problem objective is to compare four populations, the data are
interval, and the samples are independent. The correct statistical method
is the one-way analysis of variance.

ANOVA
Source of Variation   SS        df   MS       F      P-value   F crit
Between Groups        150,884    3   50,295   4.06   0.0139    2.8663
Within Groups         446,368   36   12,399
Total                 597,252   39

F = 4.06, p-value = .0139. There is enough evidence to infer that a
difference exists between the four bumpers. The question now is: which
bumpers differ?
Example
Using the four sample means and MSE = 12,399 (the mean square within
groups), we have

LSD = tα/2 √[ MSE (1/ni + 1/nj) ] = 2.030 √[ 12,399 (1/10 + 1/10) ] = 101.09
Example (continued…)
We calculate the absolute values of the differences between the sample
means and compare them to LSD = 101.09.

Hence, µ1 and µ2, µ1 and µ3, µ2 and µ4, and µ3 and µ4 differ
significantly.
The other two pairs, µ1 and µ4, and µ2 and µ3, do not differ
significantly.
Bonferroni Adjustment to LSD Method…
Fisher’s method may result in an increased probability of
committing a Type I error.

We can adjust Fisher’s LSD calculation by using the “Bonferroni
adjustment”, which accounts for the number of pairwise
comparisons to be performed.

Where we used alpha (α), say .05, previously, we now use an
adjusted value for alpha:

α* = α / C

where C = k(k – 1)/2 is the number of pairwise comparisons.
Example (continued..)
If we perform the LSD procedure with the Bonferroni adjustment, the number of pairwise
comparisons is 6 (calculated as C = k(k – 1)/2 = 4(3)/2).

We set α = .05/6 = .0083. Thus, tα/2,36 = 2.794 (available from Excel and difficult to
approximate manually) and

LSD = 2.794 √[ 12,399 (1/10 + 1/10) ] = 139.11
Example Continued..

Multiple Comparisons
                                        LSD              Omega
Treatment   Treatment   Difference   Alpha = 0.0083   Alpha = 0.05
Bumper 1    Bumper 2       -105.9        139.11          133.45
            Bumper 3       -103.8        139.11          133.45
            Bumper 4         31.8        139.11          133.45
Bumper 2    Bumper 3          2.1        139.11          133.45
            Bumper 4        137.7        139.11          133.45
Bumper 3    Bumper 4        135.6        139.11          133.45

Now, with the Bonferroni-adjusted LSD of 139.11, none of the six pairs of means differ.
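As a sketch, the conclusions above can be reproduced by comparing each absolute difference with the unadjusted Fisher LSD (≈ 101) and the Bonferroni-adjusted LSD (≈ 139); the t-values computed here differ slightly from the rounded ones on the slides, but the decisions are the same:

```python
from math import sqrt
from scipy import stats

mse, n, df_w, k = 12_399, 10, 36, 4        # values from the bumper example
diffs = {("Bumper 1", "Bumper 2"): -105.9, ("Bumper 1", "Bumper 3"): -103.8,
         ("Bumper 1", "Bumper 4"):   31.8, ("Bumper 2", "Bumper 3"):    2.1,
         ("Bumper 2", "Bumper 4"):  137.7, ("Bumper 3", "Bumper 4"):  135.6}

def lsd(alpha):
    return stats.t.ppf(1 - alpha / 2, df_w) * sqrt(mse * (2 / n))

lsd_fisher = lsd(0.05)                            # ≈ 101: four pairs exceed it
lsd_bonferroni = lsd(0.05 / (k * (k - 1) / 2))    # alpha/6, ≈ 139: no pair exceeds it
for pair, d in diffs.items():
    print(pair, abs(d) > lsd_fisher, abs(d) > lsd_bonferroni)
```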


Confirming the initial assumptions of
ANOVA
• Test for normality of the populations – KS test or goodness-of-fit test (next two
  sessions)
• Test for common variance: Levene’s test for homogeneity of variances
• For Levene’s test of the homogeneity of group variances, the residuals eij of the
  observations from their group (cell) means are calculated as follows:

  eij = Xij – X̄j

• An ANOVA is then conducted on the absolute values of the residuals. If the
  group variances are equal, then the average size of the residuals should be
  the same across all groups.
Levene’s test
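A minimal sketch of Levene’s test in Python, applied to the zone data from the earlier example; note that scipy’s implementation uses deviations from the group median by default, so center='mean' is passed to match the mean-based residuals described above:

```python
from scipy import stats

zone1 = [254, 263, 241, 237, 251]
zone2 = [234, 218, 235, 227, 216]
zone3 = [200, 222, 197, 206, 204]

stat, p_value = stats.levene(zone1, zone2, zone3, center='mean')
print(round(stat, 3), round(p_value, 3))
# A large p-value: do not reject the hypothesis that the group variances are equal.
```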
