
Session / Week 8

Introduction to Confirmatory Factor Analysis (CFA)
Key Points
01 Introduction to confirmatory factor analysis (CFA)

02 Difference between EFA and CFA

03 Developing and assessing a measurement model for CFA


Introduction to CFA
• CFA is applied in the scenario where we theoretically know
that a certain number of observed variables (called
‘indicators’) belong to one latent variable (LV).

• CFA produces a measurement model which includes at least
two LVs, observed variables, disturbances, parameters, and
covariances (which, when standardized, are correlations).
Introduction to CFA
• CFA follows a confirmatory approach: the aim is to ascertain
the validity and reliability of each of the LVs used in the
measurement model.

• Three-indicator rule: Each LV should have at least three
indicators for model identification.
• Please refer to Hair et al. (2019, p. 660).
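The three-indicator rule can be checked with a simple parameter count (the ‘t-rule’): a one-LV model is identified only when the number of free parameters does not exceed the number of unique variances and covariances, p(p+1)/2. A minimal sketch, assuming the common marker-indicator scaling (one loading per LV fixed to 1):

```python
def t_rule(p):
    """Parameter count for a one-LV CFA with p indicators,
    marker-indicator scaling (first loading fixed to 1)."""
    moments = p * (p + 1) // 2  # unique elements of the covariance matrix
    free = (p - 1) + p + 1      # free loadings + error variances + LV variance
    return moments, free, moments - free

for p in (2, 3, 4):
    moments, free, df = t_rule(p)
    print(f"p={p}: moments={moments}, free parameters={free}, df={df}")
```

With two indicators the model is under-identified (df < 0), with three it is just-identified (df = 0), and a fourth indicator leaves degrees of freedom for testing fit.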
Difference between EFA and CFA
• EFA is used when we don’t know which indicator belongs to which LV. In
other words, no a-priori theory is available. For instance, you develop a
research instrument (30 items) to measure students’ level of anxiety due to
an online teaching system. You don’t know how many LVs there will be or
which specific item(s) will load onto which specific LV. In short, an
exploratory approach is needed to obtain a new data structure in a
particular social research context.
• Remember! During EFA, when you restrict SPSS to extract a certain number
of factors, that is not CFA.
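Deciding how many LVs the items suggest is the exploratory step, and one common heuristic is to inspect the eigenvalues of the item correlation matrix and retain factors with eigenvalue greater than 1 (the Kaiser criterion). A minimal sketch on a synthetic six-item correlation matrix — the block structure below is an illustrative assumption, not real anxiety data:

```python
import numpy as np

# Illustrative correlation matrix: items 1-3 and items 4-6 each form a
# tightly correlated block, hinting at two latent factors.
R = np.array([
    [1.0, 0.7, 0.7, 0.1, 0.1, 0.1],
    [0.7, 1.0, 0.7, 0.1, 0.1, 0.1],
    [0.7, 0.7, 1.0, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 1.0, 0.7, 0.7],
    [0.1, 0.1, 0.1, 0.7, 1.0, 0.7],
    [0.1, 0.1, 0.1, 0.7, 0.7, 1.0],
])

eigvals = np.linalg.eigvalsh(R)[::-1]   # eigenvalues, largest first
n_factors = int(np.sum(eigvals > 1.0))  # Kaiser criterion
print("eigenvalues:", np.round(eigvals, 2))
print("factors suggested:", n_factors)
```

Here two eigenvalues exceed 1, matching the two built-in blocks; with real data the decision is usually cross-checked with a scree plot or parallel analysis.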
Difference between EFA and CFA
• “In contrast to EFA, confirmatory factor analysis (CFA) is appropriately used
when the researcher has some knowledge of the underlying latent variable
structure” (Byrne, 2016, p. 6).
• Following the above example, assume that three LVs emerged after EFA, each
having 10 items. If somebody else intends to apply the same instrument in
another context, they need to apply CFA to confirm whether the three LVs
with 10 items each are also observed in that context.
• In contrast to EFA, CFA generally requires examining model-fit indices and
residual covariances.
Developing and assessing a measurement model for CFA

• Total no. of LVs = 5
• Total no. of indicators = 20
• Total no. of disturbances = 20
• No. of parameters (NPAR) = ?

Source: Adapted with permission from Adil, Ab Hamid, and Waqas (2020, p. 135).
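For a model of this size, NPAR can be counted directly. A sketch under the usual assumptions (reflective indicators, marker-indicator scaling with one loading per LV fixed to 1, and all LVs allowed to covary); the count in Adil, Ab Hamid, and Waqas (2020) may differ if their scaling choices differ:

```python
n_lv, n_ind = 5, 20

free_loadings = n_ind - n_lv             # one loading per LV fixed to 1 -> 15
error_variances = n_ind                  # one disturbance variance per indicator -> 20
lv_variances = n_lv                      # 5
lv_covariances = n_lv * (n_lv - 1) // 2  # every pair of LVs covaries -> 10

npar = free_loadings + error_variances + lv_variances + lv_covariances
moments = n_ind * (n_ind + 1) // 2       # 210 unique (co)variances in the data
print("NPAR =", npar)
print("df   =", moments - npar)
```

Under these assumptions NPAR = 50, leaving 210 − 50 = 160 degrees of freedom.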
Measurement (CFA) Model

Source: Adapted from Byrne (2016, p. 14)


Types of Measurement Model

Source: Adapted from http://statwiki.kolobkreations.com/index.php?title=Exploratory_Factor_Analysis#Formative_vs._Reflective
Assessing a Measurement Model
Validity and reliability:
• Indicator reliability (outer loadings)
• Convergent validity using AVE
• Internal consistency reliability using CR
• Discriminant validity:
  – Fornell and Larcker (1981) criterion
  – Cross loadings
  – Heterotrait-monotrait (HTMT) ratio of correlations
    (applicable in AMOS version 24 and above)

Model-fit indices:
a) Badness-of-fit indices: CMIN/DF (should be < 3.0), SRMR (should be < 0.08),
   RMSEA (should be < 0.08)
b) Goodness-of-fit indices: GFI and AGFI (should be > 0.90); NFI, TLI, CFI
   (should be > 0.90)
c) Index for assessing generalizability: ECVI (should be close to 1)

Please refer to Table 3 of Hair, Ringle, and Sarstedt (2011, p. 145)
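AVE and CR can both be computed from the standardized outer loadings of an LV. A minimal sketch with illustrative loadings (the values below are made up; common cut-offs are AVE > 0.50 and CR > 0.70):

```python
# Illustrative standardized loadings for one LV with four indicators.
loadings = [0.82, 0.78, 0.74, 0.70]

sum_sq = sum(l ** 2 for l in loadings)
sum_l = sum(loadings)
errors = sum(1 - l ** 2 for l in loadings)  # indicator error variances

ave = sum_sq / len(loadings)                # average variance extracted
cr = sum_l ** 2 / (sum_l ** 2 + errors)     # composite reliability

print(f"AVE = {ave:.3f}")  # should exceed 0.50
print(f"CR  = {cr:.3f}")   # should exceed 0.70
```

With these loadings the LV clears both thresholds; a single weak loading (say 0.40) would pull the AVE down quickly, which is why indicator reliability is checked first.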


Reliability and Validity

Reliability refers to the extent to which your data collection techniques or analysis procedures will
yield consistent findings. It can be assessed by posing the following three questions (Easterby-Smith
et al., 2002:53):

1 Will the measures yield the same results on other occasions?
2 Will similar observations be reached by other observers?
3 Is there transparency in how sense was made from the raw data?

Business Research Methods, Mark Saunders, 4th Edition, Page 177, Chapter 5
Reliability and Validity
Validity is concerned with whether the findings are really about what they appear to be about. Is the
relationship between two variables a causal relationship? For example, in a study of an electronics
factory we found that employees’ failure to look at new product displays was caused not by employee
apathy but by lack of opportunity (the displays were located in a part of the factory that employees
rarely visited).

This potential lack of validity in the conclusions was minimized by a research design that built in the
opportunity for focus groups after the questionnaire results had been analyzed.

Robson (2002) has also charted the threats to validity, which provides a useful way of thinking about
this important topic.
Validity
Construct Validity
 Construct validity refers to the degree to which inferences can legitimately be made from the
operationalizations in your study to the theoretical constructs on which those operationalizations
were based.
 Like external validity, construct validity is related to generalizing. But where external validity
involves generalizing from your study context to other people, places or times, construct validity
involves generalizing from your program or measures to the concept of your program or measures.
 You might think of construct validity as a “labeling” issue. When you implement a program that you
call a “Head Start” program, is your label an accurate one? When you measure what you term
“self-esteem”, is that what you were really measuring?
Convergent & Discriminant Validity
 Convergent and discriminant validity are both considered subcategories or subtypes of construct
validity. The important thing to recognize is that they work together: if you can demonstrate that
you have evidence for both convergent and discriminant validity, then you’ve by definition
demonstrated that you have evidence for construct validity. I find it easiest to think about
convergent and discriminant validity as two interlocking propositions. In simple words, I would
describe what they are doing as follows:
 Measures of constructs that theoretically should be related to each other are, in fact, observed to
be related to each other (that is, you should be able to show a correspondence or convergence
between similar constructs)

and

 measures of constructs that theoretically should not be related to each other are, in fact, observed
to not be related to each other (that is, you should be able to discriminate between dissimilar
constructs)
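The Fornell and Larcker (1981) criterion listed earlier operationalizes exactly this pairing: the square root of each LV’s AVE (convergence within the construct) should exceed that LV’s correlation with every other LV (discrimination between constructs). A minimal sketch with illustrative numbers — the AVEs and correlations below are made up for demonstration:

```python
import math

# Illustrative AVEs for three LVs and their inter-LV correlations.
ave = {"anxiety": 0.58, "stress": 0.61, "burnout": 0.55}
corr = {("anxiety", "stress"): 0.45,
        ("anxiety", "burnout"): 0.38,
        ("stress", "burnout"): 0.52}

def fornell_larcker_ok(ave, corr):
    """Each LV's sqrt(AVE) must exceed its correlation with every other LV."""
    for (a, b), r in corr.items():
        if math.sqrt(ave[a]) <= abs(r) or math.sqrt(ave[b]) <= abs(r):
            return False
    return True

print("Discriminant validity supported:", fornell_larcker_ok(ave, corr))
```

Here the smallest sqrt(AVE), about 0.74 for burnout, still exceeds the largest inter-LV correlation (0.52), so the criterion is met; a correlation above any sqrt(AVE) would flag the pair as insufficiently distinct.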
