CFA Interpretation
(Study 2)
4.16 SECTION D
CFA is frequently used to develop and refine measurement instruments, assess construct validity, detect method effects, and examine factor invariance across time and between groups (Brown, 2006). As a result, CFA is an excellent tool for investigating the questions that interest most psychologists. Applications of CFA have been on the rise since the late 1990s, with the majority in scale development and construct validation (Brown, 2006; Russell, 2002).
CFA belongs to the structural equation modeling (SEM) family and is used to assess measurement model validity within path or structural analysis (Brown, 2006; MacCallum & Austin, 2000). When using SEM, researchers usually assess the measurement model (whether the measured variables adequately represent the intended constructs) before examining the structural model. As Thompson (2004) notes, it is undesirable to relate constructs within an SEM model without first attending to the measurement properties specified in the model. Problems with SEM models are frequently caused by faults in the measurement model, which can be identified using CFA (Brown, 2006).
108 | P a g e
4.16.2 IDENTIFICATION OF MODEL
Table 40 depicts that there are 54 distinct sample moments and 30 distinct parameters to estimate in the proposed model. Subtracting the 30 unique parameters from the 54 distinct sample moments leaves 24 degrees of freedom. Since the degrees of freedom are positive, the model is over-identified.
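The identification arithmetic above can be sketched as follows. The count of 54 sample moments is consistent with a nine-indicator model estimated with a mean structure, giving p(p + 3)/2 distinct moments; that counting rule is an assumption here, not something stated in the text.

```python
# Model identification check: degrees of freedom = distinct sample
# moments minus free parameters. The 9-indicator count and the
# mean-structure moment formula p(p + 3)/2 are assumptions.
p = 9                                   # observed indicators (3 per construct)
sample_moments = p * (p + 3) // 2       # variances, covariances, and means
free_parameters = 30                    # as reported for the proposed model
df = sample_moments - free_parameters

print(sample_moments, free_parameters, df)
assert df > 0, "positive df: the model is over-identified"
```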
Cronbach's alpha should be 0.70 or higher to be considered acceptable (Nunnally & Bernstein, 1994). In the teaching efficacy model, Cronbach's alpha was used to check the reliability of the research instrument (all constructs).
Construct      Cronbach's alpha
Instruction    0.821
Technology     0.796
Inclusion      0.797
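The reliability check above follows the standard Cronbach's alpha formula, which can be sketched as below; the data are simulated for illustration only, not the study's responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated three-item scale (illustration only, not the study's data)
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=0.6, size=(200, 3))
print(round(cronbach_alpha(items), 3))
```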
The variance extracted measure offers another way to examine the measurement model's fit. It reflects the overall variance in the indicators accounted for by the latent constructs (Hair et al., 2006). When the indicators are true representatives of the latent construct, the variance extracted values are higher. Table 8 reveals that all constructs are well above the required level of 0.50 (50% variance extracted), ranging from 0.56 to 0.60. As a result, the indicators are sufficient for all constructs as the measurement model is presently specified.
Construct      Variance extracted
Instruction    0.605
Technology     0.567
Inclusion      0.568
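With standardized loadings and uncorrelated errors, the variance extracted (AVE) for each construct is the mean of the squared loadings. The loadings below are the standardized estimates reported later in this section; small differences from the tabled values are expected from rounding.

```python
# AVE per construct = mean of squared standardized loadings (assuming
# uncorrelated errors). Loadings are the standardized estimates reported
# in this section; rounding can shift the third decimal.
loadings = {
    "Instruction": [0.706, 0.788, 0.857],
    "Technology":  [0.664, 0.803, 0.785],
    "Inclusion":   [0.714, 0.819, 0.722],
}

for construct, lams in loadings.items():
    ave = sum(l * l for l in lams) / len(lams)
    print(construct, round(ave, 3))
```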
4.16.5 MAXIMUM SHARED VARIANCE (MSV)
Table: 43 Maximum shared variance (MSV) for the Instruction, Technology, and Inclusion components
Construct      MSV
Instruction    0.282
Technology     0.282
Inclusion      0.219
Average shared variance (ASV) is the mean of the shared (squared) variances between the latent constructs: the shared variance between latent constructs 1 and 2, between constructs 2 and 3, and between constructs 3 and 1, averaged together. The average variance extracted (AVE) should be higher than the average shared variance (ASV).
Table: 44 Average shared variance (ASV) between the Instruction, Technology, and Inclusion components
Construct      ASV
Instruction    0.831
Technology     0.808
Inclusion      0.809
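Both quantities can be sketched from the inter-construct correlations reported in Table 50: MSV for a construct is its largest squared correlation with any other construct, and ASV is the mean of those squared correlations. This is a sketch of the standard definitions only; values computed from the rounded correlations will not necessarily match the software's output exactly.

```python
# MSV and ASV sketch from the inter-construct correlations (Table 50).
# MSV = largest squared correlation involving the construct;
# ASV = mean of its squared correlations with the other constructs.
corr = {
    ("Inclusion", "Instruction"):  0.466,
    ("Technology", "Instruction"): 0.525,
    ("Technology", "Inclusion"):   0.295,
}

for c in ["Instruction", "Technology", "Inclusion"]:
    sq = [r * r for pair, r in corr.items() if c in pair]
    print(c, "MSV:", round(max(sq), 3), "ASV:", round(sum(sq) / len(sq), 3))
```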
4.16.7 CONVERGENT VALIDITY
Convergent validity refers to the degree to which scores obtained from one measure agree with scores obtained from another measure of the same construct (Messick, 1995). It is established when the results from two independent instruments measuring the same concept are highly correlated (Straub, 1989). It is the extent to which measures that should theoretically be related are in fact related: the general agreement between evaluations gathered independently of one another (Campbell, 1959).
4.16.8 DISCRIMINANT VALIDITY (CFA)
Discriminant validity is established when two variables that are predicted, on the basis of theory, to be uncorrelated are empirically shown to be uncorrelated, that is, when measures of supposedly unrelated constructs are indeed unrelated (Messick, 1995; Sperry, 2004). For example, if students who drop out of high school score higher on a survey designed to predict future dropouts than students who finish their schooling, the survey demonstrates discriminant usefulness (Campbell, 1959).
Table: 48 Values of maximum shared variance (MSV) and average variance extracted (AVE)
Table: 49 Values of average shared variance (ASV) and average variance extracted (AVE)
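The comparisons named in those tables reduce to a simple check: each construct's AVE should exceed its MSV (and its ASV). A sketch using the AVE and MSV values reported earlier in this section:

```python
# Discriminant validity check: AVE should exceed MSV for every construct
# (values as reported earlier in this section).
ave = {"Instruction": 0.605, "Technology": 0.567, "Inclusion": 0.568}
msv = {"Instruction": 0.282, "Technology": 0.282, "Inclusion": 0.219}

for c in ave:
    print(c, "AVE > MSV:", ave[c] > msv[c])
```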
Table: 50 Correlations among latent constructs
Factor Estimate
Inclusion <--> Instruction 0.466
Technology <--> Instruction 0.525
Technology <--> Inclusion 0.295
The table shows the correlations between all of the model's latent constructs. All of the constructs are significantly associated with one another, and none of the correlations exceed 0.525. As a result, the risk of multicollinearity is low. Values greater than 0.80 may indicate a concern, and values greater than 0.90 should always be investigated (Hair et al., 2006).
Table: 51 Standardized factor loadings (variables)
Construct              Item     Estimate
Instruction Efficacy   ST 23    .706
                       ST 24    .788
                       ST 25    .857
Technology Efficacy    ST 36    .664
                       ST 37    .803
                       ST 38    .785
Inclusion Efficacy     ST 17    .714
                       ST 18    .819
                       ST 20    .722
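For each item above, the squared standardized loading gives the indicator reliability, the share of that item's variance explained by its construct. This is a standard reading of the estimates, sketched here for illustration rather than an analysis the study reports.

```python
# Indicator reliability = squared standardized loading: the proportion
# of each item's variance explained by its latent construct.
loadings = {
    "ST 23": 0.706, "ST 24": 0.788, "ST 25": 0.857,   # Instruction Efficacy
    "ST 36": 0.664, "ST 37": 0.803, "ST 38": 0.785,   # Technology Efficacy
    "ST 17": 0.714, "ST 18": 0.819, "ST 20": 0.722,   # Inclusion Efficacy
}

for item, lam in loadings.items():
    print(item, round(lam ** 2, 3))
```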
Fig. 4 Results of CFA
Table: 52 Overall Model Fit

Index                             Criterion                                    Value           Result
Normed Fit Index (NFI)            0.90 to 0.95 = marginal; > 0.95 = good       NFI = 0.927     Marginal
Incremental Fit Index (IFI)       Acceptable value: ≥ 0.90                     IFI = 0.961     Good

3. Noncentrality-Based Measures
Root Mean Square Error of
Approximation (RMSEA)             < 0.05 good; < 0.10 moderate; > 0.10 bad     RMSEA = 0.075   Moderate
Comparative Fit Index (CFI)       ≥ 0.95 great; > 0.90 traditional             CFI = 0.959     Great
Table 52 lists indices from all four types of goodness-of-fit measures. These findings show that the proposed model fits the data well: all measures were within acceptable bounds, and the model was accepted.
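RMSEA is computed from the model chi-square, its degrees of freedom, and the sample size. The chi-square and N below are hypothetical values chosen for illustration, since the excerpt reports neither; only df = 24 comes from the identification step above.

```python
import math

# RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))). The chi-square and
# sample size here are hypothetical; only df = 24 comes from the text.
def rmsea(chi2: float, df: int, n: int) -> float:
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(rmsea(chi2=58.0, df=24, n=254), 3))
```

With these hypothetical inputs the function returns roughly 0.075; a chi-square at or below df yields an RMSEA of zero.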