
ANCOVA

• The one-way ANCOVA (analysis of covariance) can be thought of as an extension of the one-way ANOVA that incorporates a covariate.
• Like the one-way ANOVA, the one-way ANCOVA is used to determine whether there are any significant differences between two or more groups on a dependent variable.
• Compared to the one-way ANOVA, the one-way ANCOVA has the additional benefit of allowing you to "statistically control" for a third variable (sometimes known as a "confounding", "concomitant", or "control" variable) that you believe will affect your results. This third variable, which could be confounding your results, is called the covariate (a variable you do not want to study), and you include it in your one-way ANCOVA analysis.
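Conceptually, a one-way ANCOVA is a linear model that predicts the dependent variable from the group membership and the covariate together. The sketch below fits that model by hand (solving the normal equations with Gaussian elimination, so no external libraries are needed) on small hypothetical scores; the data, variable names, and coding (0 = control, 1 = experimental) are all illustrative assumptions, not the slide data.

```python
# ANCOVA as a linear model: posttest = b0 + b1*group + b2*pretest
# Hypothetical scores (NOT the slide data); group coded 0 = control, 1 = experimental.
pre  = [2, 4, 6, 4, 6, 8]
post = [5, 6, 7, 6, 7, 8]
grp  = [0, 0, 0, 1, 1, 1]

# Build the normal equations X'X b = X'y for the columns [1, grp, pre].
cols = [[1.0] * len(pre), [float(g) for g in grp], [float(x) for x in pre]]
k = len(cols)
A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(k)] for i in range(k)]
b = [sum(ci * y for ci, y in zip(cols[i], post)) for i in range(k)]

for i in range(k):                      # forward elimination with partial pivoting
    p = max(range(i, k), key=lambda r: abs(A[r][i]))
    A[i], A[p] = A[p], A[i]
    b[i], b[p] = b[p], b[i]
    for r in range(i + 1, k):
        f = A[r][i] / A[i][i]
        for c in range(i, k):
            A[r][c] -= f * A[i][c]
        b[r] -= f * b[i]

beta = [0.0] * k
for i in reversed(range(k)):            # back substitution
    beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]

b0, b1, b2 = beta
print(f"intercept={b0:.2f}, group effect={b1:.2f}, covariate slope={b2:.2f}")
```

In this toy data the group with the higher raw posttest mean also started with a higher pretest mean, and the fitted group effect comes out at zero: the apparent group difference is entirely explained by the covariate, which is exactly the situation ANCOVA is designed to reveal.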

Example
The researcher wanted to know if different methods of teaching affect students'
performance in math. She gave a pretest on math performance prior to the treatment and then a
posttest after the treatment. In this example, the pretest score is the covariate, because we believe
that those who scored high on the pretest are likely to score high on the posttest.
• Another difference is that, while the ANOVA looks for differences in the group means, the
ANCOVA looks for differences in adjusted means (i.e., means adjusted for the covariate).
Example
o If one of the comparison groups had an above-average mean on the control
variable (as compared with the other groups in the study), then that group’s
mean score on the dependent variable will be lowered.

o In contrast, any group that has a below average mean on the covariate will have
its mean score on the dependent variable raised. The degree to which any
group’s mean score on the dependent variable is adjusted depends on how far
above or below average that group stands on the control variable.

o By adjusting the mean scores on the dependent variable in this fashion,
ANCOVA provides the best estimates of how the comparison groups would have
performed if they had all possessed identical (statistically equivalent) means on
the control variable(s).
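The adjustment described above can be computed directly: each group's mean on the dependent variable is shifted by the pooled within-group regression slope times that group's distance from the grand covariate mean. The sketch below uses small hypothetical scores (not the slide data) to show the mechanics.

```python
import statistics

# Hypothetical covariate (x) and outcome (y) scores for two groups -- NOT the
# slide data. Group B starts above average on the covariate, group A below.
groups = {
    "A": {"x": [2, 4, 6], "y": [5, 6, 7]},
    "B": {"x": [4, 6, 8], "y": [6, 7, 8]},
}

# Pooled within-group regression slope of y on x.
sxy = sxx = 0.0
for g in groups.values():
    mx, my = statistics.mean(g["x"]), statistics.mean(g["y"])
    sxy += sum((x - mx) * (y - my) for x, y in zip(g["x"], g["y"]))
    sxx += sum((x - mx) ** 2 for x in g["x"])
b_w = sxy / sxx

grand_mx = statistics.mean([x for g in groups.values() for x in g["x"]])

# Adjusted mean = unadjusted mean, moved toward what it would be if the group
# had sat exactly at the grand covariate mean.
adjusted = {}
for name, g in groups.items():
    adjusted[name] = statistics.mean(g["y"]) - b_w * (statistics.mean(g["x"]) - grand_mx)
    print(name, "unadjusted:", statistics.mean(g["y"]), "adjusted:", round(adjusted[name], 2))
```

Here B's above-average covariate mean pulls its outcome mean down and A's below-average covariate mean pushes its outcome mean up, so the two adjusted means coincide, mirroring the lowering/raising logic in the bullets above.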
Note:
• ANCOVA is usually used when there are differences between your baseline groups (Senn, 1994;
Overall, 1993). It removes any effect of covariates.
• The technique is also common in non-experimental research (e.g., surveys) and in quasi-experiments (when study participants can't be assigned randomly).

• In ANCOVA, you can have more than one covariate; the covariates are measured on a continuous
scale.

• In addition, the "one-way" part of one-way ANCOVA refers to the number of independent
variables.

Example
o If you have two independent variables rather than one, you could run a two-way
ANCOVA.
o If you have three independent variables rather than one, you could run a three-way
ANCOVA.
o Etc.
What assumptions does the test make and what will happen if the assumption is violated?

Assumption #1: Variable types:

▪ Independent variable: categorical (two or more categories/levels, independent groups)
▪ Dependent variable: continuous (interval or ratio)
▪ Covariate: continuous (interval or ratio)

Assumption #2: You should have independence of observations, which means that there is no relationship between the
observations in each group or between the groups themselves. For example, there must be different participants in
each group with no participant being in more than one group. This is more of a study design issue than something
you can test for, but it is an important assumption of a one-way ANCOVA. If your study fails this assumption, you will
need to use another statistical test instead of a one-way ANCOVA (e.g., a repeated measures design).

Assumption #3: There should be no significant outliers. Outliers may have a strong influence over the fitted
slope and intercept, giving a poor fit to the bulk of the data points. Outliers tend to increase the
estimate of residual variance, lowering the chance of rejecting the null hypothesis. They may be due
to recording errors, which may be correctable, or they may be due to the Y values not all being
sampled from the same population. Apparent outliers may also be due to the Y values being from the
same, but nonnormal, population.
Assumption #4: The dependent variable is approximately normally distributed for each category of the
independent variable. However, like ANOVA, ANCOVA is robust against violation of the normality
assumption, provided that the sample size is not small.

Assumption #5: The covariate should be linearly related to the dependent variable at each level of the independent
variable. You can test this assumption in SPSS Statistics by plotting a grouped scatterplot of the covariate,
post-test scores of the dependent variable and independent variable. If there is no linear relation between X and
Y, then the analysis of covariance offers no improvement over the one-way analysis of variance in detecting
differences between the group means.

Assumption #6: There needs to be homogeneity of variances. You can test this assumption in SPSS Statistics using
Levene's test for homogeneity of variances. This value can be found in the ANCOVA output. To the extent that this
assumption is violated and the group sample sizes differ, the validity of the results of the one-way ANCOVA
should be questioned. Even with equal sample sizes, the results of the standard post hoc tests should be
mistrusted if the population variances differ.
Assumption #7: There needs to be homogeneity of regression slopes, which means that there is no interaction
between the covariate and the independent variable. If this assumption is violated, drop the covariate from the
model so that you are not violating the assumptions of ANCOVA, and run a one-way ANOVA instead. This seems to be the
popular option among most critics.

You can check assumptions #3, #4, #5, #6, and #7 using SPSS Statistics. Before doing this, you should make sure
that your data meet assumptions #1 and #2, although you don't need SPSS Statistics for those. Remember that if you do not
run the statistical tests on these assumptions correctly, the results you get when running a one-way ANCOVA might not be
valid.
Example 2

The researcher wanted to know if two different methods of teaching (Method A and Method B) affect students' performance in math. She gave a pretest on math performance prior to the treatment and then a posttest after the treatment. Students' scores are shown at the right.

Dependent Variable: Posttest of Math Performance
Independent Variable: Group (method of teaching: Experimental Group, Control Group)
Covariate: Pretest

[Data table: case-by-case pretest and posttest scores for 15 students in each of the Experimental and Control groups; the individual values are garbled in this copy, but the group summaries appear in Table X below.]

TESTING THE ASSUMPTION # 3 (No Outliers) and ASSUMPTION # 4 (Normality)

To conduct this test, follow these steps:


1. Click Analyze, click Descriptive Statistics, and then click Explore
2. Click the dependent variable, then click ▶ to move it to the Dependent List box
3. Click the independent variable, then click ▶ to move it to the Factor List box
4. Select Both
5. Click Statistics
6. Select Descriptives and Outliers
7. Click Continue. This will bring you back to the Explore screen
8. Click Plots
9. Select Normality plots with tests
10. Click Continue. This will bring you back to the Explore screen
11. Click OK
SPSS Output
Since the p-value (Sig.) of the normality test in both groups is greater than α = .05,
assumption #4 is met.
The boxplots show that there are no outliers. Hence, assumption #3 is met.
TESTING THE ASSUMPTION #5 (LINEARITY)

Click Matrix Scatter, then click Define. Move the IV to Rows, and move the DV
and the Covariate to Matrix Variables. Click OK.

We can interpret either the left side or the right side of the matrix.
For this example, we can see that pretest and posttest in the control group
are linearly related, but not in the experimental group.
Therefore, Assumption #5 is not met. But, for the purpose of illustration,
we will proceed.

[Figure: examples of linear and non-linear plots.]
TESTING THE ASSUMPTION # 7 (HOMOGENEITY-OF-REGRESSION SLOPES)
To conduct this test, follow these steps:
1. Click Analyze, click General Linear Model, and then click Univariate
2. Click the dependent variable, then click ▶ to move it to the Dependent Variable box
3. Click the independent variable, then click ▶to move it to the Fixed Factor(s) box

4. Click the covariate, then click ▶to move it to the Covariate(s) box
5. Click Model
6. Click Custom under Specify Model
7. Holding down the Ctrl key, click the independent variable (IV) and the covariate (Cov) in the Factors & Covariates
box. Check to see that the default option Interaction is specified in the drop-down menu in the Build Term(s) box.
If it is not, select it

8. Click ▶ and the IV*Cov should now appear in the Model box
9. Click Continue. This will bring you back to the Univariate screen
10. Click OK

SPSS Output
• If the p-value (Sig.) is greater than α, then assumption #7 is met
• If the p-value (Sig.) is less than α, then assumption #7 is NOT met

In this example, we can see that Assumption #7 is NOT met. But, for
the purpose of illustration we will proceed.
CONDUCTING THE ONE-WAY ANCOVA
and
TESTING THE ASSUMPTION # 6 (HOMOGENEITY OF VARIANCES)

1. Click Analyze, click General Linear Model, and then click Univariate
2. Click Reset
If you have not exited SPSS – the prior commands will still be shown. As a precaution for avoiding possible errors
– click the reset key and begin the procedure from the initial starting point
3. Click the dependent variable, then click ▶ to move it to the Dependent Variable box
4. Click the independent variable, then click ▶ to move it to the Fixed Factor(s) box
5. Click the covariate, then click ▶ to move it to the Covariate(s) box
6. Click on Options
7. Select Descriptive statistics, Estimates of effect size, and Homogeneity tests in the Display box
8. Click Continue
9. This will bring you back to the Univariate screen…
10. Click on Model
11. Select Full factorial
12. Click Continue
13. This will bring you back to the Univariate screen – click OK

SPSS
Output
This is the Unadjusted means

Interpret this for Assumption #6


• If the p-value (Sig.) is greater than α = .05, then assumption #6 is met.
• If the p-value (Sig.) is less than α = .05, then assumption #6 is NOT met.

In this example, we can see that Assumption #6 is met.

This is the ANCOVA table, and we are interested in whether there is a
significant difference between groups.

In this example, since the p-value (Sig.) is greater than α = .05, we conclude
that there is no significant difference between the control and the
experimental group.

This is the Adjusted means


ANCOVA - APA Style Reporting
For reporting our ANCOVA, we'll first present descriptive statistics for:
• our covariate;
• our dependent variable (unadjusted);
• our dependent variable (adjusted for the covariate).

Note: The standard deviation (SD) measures the amount of variability, or dispersion, of the
individual data values around the mean, while the standard error of the mean (SEM) measures
how far the sample mean of the data is likely to be from the true population mean. The SEM is
always smaller than the SD.
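The SD/SEM relationship in the note above is just SEM = SD / √n, which is why the SEM is always the smaller of the two. A minimal sketch with hypothetical scores:

```python
import math
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]          # hypothetical posttest scores
sd = statistics.stdev(scores)               # sample standard deviation
sem = sd / math.sqrt(len(scores))           # standard error of the mean
print(f"SD = {sd:.3f}, SEM = {sem:.3f}")    # the SEM is always the smaller one
```

Dividing by √n means the SEM shrinks as the sample grows, while the SD settles toward the population spread regardless of n.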

Table X

Unadjusted and Covariate-Adjusted Descriptive Statistics for Mathematics Performance

Group (Treatment)   n    Pretest        Posttest (Unadjusted)   Posttest (Adjusted)
                         Mean    SD     Mean    SD              Mean    SD
Control             15   3.20    1.32   4.93    1.16            4.94    1.01
Experimental        15   3.27    1.28   4.40    .99             4.39    1.01

Explain what happened to the Mean and Standard Deviation.

Replace the SD column with SEM if the subjects/cases are randomly
assigned to treatments.

SE is the standard error: SEM = SD / √n


• Second, we'll present a standard ANOVA table, but we need to make corrections: delete the rows not included in the
summary table for the one-way ANCOVA.
• The summary table produced by SPSS contains several additional lines. Below is the SPSS output, with the lines that do
not belong in the reported table struck through.
Tests of Between-Subjects Effects
Dependent Variable: Posttest

Source            Type III Sum of Squares   df   Mean Square   F        Sig.   Partial Eta Squared
Corrected Model   6.637a                    2    3.318         3.196    .057   .191
Intercept         52.966                    1    52.966        51.020   .000   .654
Pretest           4.503                     1    4.503         4.338    .047   .138
Group             2.299                     1    2.299         2.215    .148   .076
Error             28.030                    27   1.038
Total             688.000                   30
Corrected Total   34.667                    29

a. R Squared = .191 (Adjusted R Squared = .132)

• Thus, the second table would be:

Table Y

Analysis of Covariance for Mathematics Performance by Group with Pretest as Covariate

Source                    Type III Sum of Squares   df   Mean Square   F      Sig.   Partial Eta Squared
Pretest                   4.50                      1    4.50          4.34   .047   .14
Group (teaching method)   2.30                      1    2.30          2.22   .148   .076
Error (Within)            28.03                     27   1.04
Total                     34.67                     29

Note: Partial eta squared is the measure of effect size. Do not report it if the p-value is NOT
significant; you can delete this column.

We need to report the results of the assumption testing here (assumptions #3 to #7), but since many of the assumptions
were violated, the ANCOVA is not appropriate for this example. We continued the ANCOVA analysis for the purpose of
illustration only.

The covariate, pretest, was significantly related to the posttest, F(1, 27) = 4.34, p = .047. However, there was a nonsignificant
effect of teaching method on posttest scores after controlling for the effect of pretest scores, F(1, 27) = 2.22, p = .15.
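The F ratios reported above can be sanity-checked from the table itself, since each F is just the effect's mean square divided by the error mean square. A minimal check using the sums of squares and degrees of freedom from the ANCOVA table (tiny discrepancies come from rounding in the reported values):

```python
# Recomputing the F ratios in the ANCOVA table: F = MS(effect) / MS(error),
# using the Type III sums of squares and df reported above.
ss = {"Pretest": (4.503, 1), "Group": (2.299, 1)}
ss_error, df_error = 28.030, 27

ms_error = ss_error / df_error                              # ~1.038
f_ratios = {src: (s / df) / ms_error for src, (s, df) in ss.items()}
for src, f in f_ratios.items():
    print(f"{src}: F({ss[src][1]}, {df_error}) = {f:.2f}")
```

This kind of arithmetic check is a quick way to catch transcription errors when moving numbers from SPSS output into an APA-style table.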
