CFA Interpretation


Confirmatory Factor Analysis (Inferential Statistic)

(Study 2)
4.16 SECTION D

4.16.1 CONFIRMATORY FACTOR ANALYSIS (CFA)

CFA is frequently used to develop and refine measuring instruments, assess construct validity, detect method effects, and assess factor invariance across time and between groups (Brown, 2006). As a result, CFA is an excellent tool for investigating questions of interest to most psychologists. The uses of CFA have been on the rise since the late 1990s, with the majority of applications in scale development and construct validation (Brown, 2006; Russell, 2002).

CFA is a member of the structural equation modelling (SEM) family and is used to assess model validity in path or structural analysis (Brown, 2006; MacCallum & Austin, 2000). Researchers usually assess the measurement model before examining the structural model when using SEM, that is, whether the measured variables properly represent the intended constructs. Thompson (2004) notes that it is undesirable to relate constructs within an SEM model without first attending to the measurement properties specified in the model. Problems with SEM models are frequently caused by faults in the measurement model, which can be identified using CFA (Brown, 2006).

Structural Equation Modelling (SEM) is a comprehensive statistical technique for testing hypotheses about the relationships among observed and latent variables. SEM is a confirmatory approach that provides comprehensive methods for evaluating and modifying both the measurement and the structural model. The unidimensionality, validity, and reliability of a measurement model can all be estimated.

In this study, the teaching efficacy of physical education teachers is used as the research topic to test the validity and reliability of the measurement model. The research focuses on three latent variables: inclusion efficacy, instruction efficacy, and technology efficacy.

4.16.2 IDENTIFICATION OF MODEL

Model specification, identification, parameter estimation, model assessment, and model modification are the five logical phases of SEM (Kline, 2010; Hoyle, 2011; Byrne, 2013). Model identification, one of these five steps, determines whether a model is over-identified, just-identified, or under-identified. Coefficients can be computed only for just-identified or over-identified models. A model is over-identified when the number of distinct sample moments exceeds the number of parameters to be estimated, and just-identified when the two are equal. Under-identified models cannot be estimated.

Table: 40 Calculation of degrees of freedom from the distinct sample moments and the distinct parameters to be estimated

Computation of degrees of freedom (Default model)

Number of distinct sample moments: 54

Number of distinct parameters to be estimated: 30

Degrees of freedom (54 - 30): 24

Table 40 depicts that there are 54 distinct sample moments and 30 distinct parameters to estimate in the proposed model. Subtracting the 30 distinct parameters from the 54 distinct sample moments leaves a total of 24 degrees of freedom. Since the degrees of freedom are a positive number, the model is over-identified.
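
The degrees of freedom can be reproduced by hand. Below is a minimal Python sketch, assuming nine observed indicators (three per factor, as in Table 51) and a mean structure, so that the distinct sample moments number p(p + 3)/2; the parameter count of 30 is taken from the software output.

# Degrees of freedom for the CFA model.
p = 9                                 # observed indicators (assumed: 3 per latent factor)
sample_moments = p * (p + 3) // 2     # variances + covariances + means = 54
free_parameters = 30                  # distinct parameters reported by the output
df = sample_moments - free_parameters
print(sample_moments, free_parameters, df)   # 54 30 24 -> over-identified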

4.16.3 COMPOSITE RELIABILITY

A measure's reliability, according to Sekaran (2006), is "an indication of stability and consistency." The internal consistency approach, which involves computing Cronbach's alpha, is the most popular method for verifying the reliability of research instruments. Internal consistency reliability verifies that respondents' responses to all of the items in a measure are consistent and that the items are independent measures of the same concept (Sekaran, 2006). Cheng (2001) stated that an indicator/measure with severely low internal consistency should be removed. Cronbach's alpha must be equal to or greater than 0.70 to be considered acceptable (Nunnally & Bernstein, 1994). In the teaching efficacy model, Cronbach's alpha was used to check the reliability of the research instrument (all constructs). Composite reliability (CR), which is estimated from the factor loadings of the CFA model, serves the same purpose and is reported for each construct in Table 41.

Table: 41 Composite reliability (CR) of the Instruction, Technology, and Inclusion components

FACTOR COMPOSITE RELIABILITY

Instruction 0.821

Technology 0.796

Inclusion 0.797
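
The composite reliability values above can be approximately reproduced from the standardized loadings reported later in Table 51. The following is a minimal Python sketch; because the published loadings are rounded to three decimals, the results only approximate Table 41.

# Composite reliability: CR = (sum of loadings)^2 /
# ((sum of loadings)^2 + sum of error variances),
# where each error variance is 1 - loading^2 for standardized loadings.
def composite_reliability(loadings):
    total = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error)

instruction = [0.706, 0.788, 0.857]   # standardized loadings from Table 51
print(round(composite_reliability(instruction), 3))   # ~0.83, vs. 0.821 reported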

4.16.4 AVERAGE VARIANCE EXTRACTED (AVE)

Another way to examine the measurement model's fitness is the average variance extracted. The variance extracted measure reflects the overall variance in the indicators accounted for by the latent construct (Hair et al., 2006). When the indicators are true representatives of the latent construct, the extracted variance is higher. Table 42 reveals that all constructs are well above the required level of 0.50, or 50% variance extracted, with values ranging from 0.567 to 0.605. As a result, the indicators are sufficient for all constructs as the measurement model is presently specified.

Table: 42 Average Variance Extracted (AVE) values of Instruction, Technology, and Inclusion

FACTOR AVERAGE VARIANCE EXTRACTED

Instruction 0.605

Technology 0.567

Inclusion 0.568
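
The AVE values can be checked the same way from the standardized loadings in Table 51. A minimal Python sketch follows; again, the rounded loadings mean the results only approximate the reported figures.

# Average variance extracted: AVE = mean of the squared standardized loadings.
def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

factors = {
    "Instruction": [0.706, 0.788, 0.857],
    "Technology":  [0.664, 0.803, 0.785],
    "Inclusion":   [0.714, 0.819, 0.722],
}
for name, loadings in factors.items():
    print(name, round(average_variance_extracted(loadings), 3))
# Instruction ~0.618, Technology ~0.567, Inclusion ~0.567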

4.16.5 MAXIMUM SHARED VARIANCE (MSV)

Maximum shared variance (MSV) is the largest share of variance that a latent construct has in common with any other construct in the model, that is, the highest of the squared correlations between latent constructs 1 and 2, 2 and 3, and 3 and 1. For discriminant validity, the average variance extracted (AVE) should be higher than the MSV.

Table: 43 Maximum shared variance (MSV) between the Instruction, Technology, and Inclusion components

FACTOR MAXIMUM SHARED VARIANCE (MSV)

Instruction 0.282

Technology 0.282

Inclusion 0.219

4.16.6 AVERAGE SHARED VARIANCE (ASV)

Average shared variance (ASV) is the mean of the squared correlations between a latent construct and the other constructs in the model (latent constructs 1 and 2, 2 and 3, and 3 and 1). For discriminant validity, the average variance extracted (AVE) should be higher than the ASV.

Table: 44 Average shared variance (ASV) between the Instruction, Technology, and Inclusion components

FACTOR AVERAGE SHARED VARIANCE (ASV)

Instruction 0.831

Technology 0.808

Inclusion 0.809
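
Both MSV and ASV follow directly from the inter-construct correlations reported later in Table 50. The sketch below assumes MSV is the largest squared correlation involving a construct and ASV is the mean of its squared correlations; values recomputed from the rounded correlations approximate the reported MSV figures, while the reported ASV figures were presumably obtained from the unrounded solution.

# MSV = max squared correlation with any other construct;
# ASV = mean of the squared correlations with all other constructs.
correlations = {                          # latent correlations from Table 50
    ("Inclusion", "Instruction"): 0.466,
    ("Technology", "Instruction"): 0.525,
    ("Technology", "Inclusion"): 0.295,
}
for factor in ("Instruction", "Technology", "Inclusion"):
    shared = [r ** 2 for pair, r in correlations.items() if factor in pair]
    msv, asv = max(shared), sum(shared) / len(shared)
    print(factor, "MSV:", round(msv, 3), "ASV:", round(asv, 3))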

4.16.7 CONVERGENT VALIDITY

Convergent validity refers to the degree to which scores on one measure correspond to scores obtained from another measure of the same construct (Messick, 1995). It is established when results obtained from two independent instruments measuring the same concept are highly correlated. It is the extent to which two measured variables that should theoretically be related are in fact related to one another (Straub, 1989); that is, the actual general agreement between evaluations gathered independently of one another when the measures should, in theory, be connected (Campbell, 1959).

Table: 45 Values of average variance extracted (AVE) and composite reliability (CR)

FACTOR AVE CR LOGIC RESULT

Instruction 0.605 0.821 CR >AVE ACCEPT

Technology 0.567 0.796 CR >AVE ACCEPT

Inclusion 0.568 0.797 CR >AVE ACCEPT

Table: 46 Average Variance Extracted (AVE)

FACTOR AVE LOGIC RESULT

Instruction 0.605 AVE > 0.5 ACCEPT

Technology 0.567 AVE > 0.5 ACCEPT

Inclusion 0.568 AVE > 0.5 ACCEPT

Table: 47 Composite reliability of Instruction, Technology, and Inclusion efficacy (must be greater than 0.7)

FACTOR CR LOGIC RESULT

Instruction 0.821 CR > 0.7 ACCEPT

Technology 0.796 CR > 0.7 ACCEPT

Inclusion 0.797 CR > 0.7 ACCEPT

4.16.8 DISCRIMINANT VALIDITY (CFA)

Discriminant validity is established when two variables that are predicted to be uncorrelated on the basis of theory are shown empirically to be distinct, that is, when the supposedly unrelated measures are indeed found to be unrelated (Messick, 1995; Sperry, 2004). For example, a survey designed to predict future high school dropouts demonstrates discriminant usefulness if students who drop out of high school score differently from students who finish their schooling (Campbell, 1959).

Table: 48 Values of maximum shared variance (MSV) and average variance extracted (AVE)

FACTOR AVE MSV LOGIC RESULT

Instruction 0.605 0.282 AVE > MSV ACCEPT

Technology 0.567 0.282 AVE > MSV ACCEPT

Inclusion 0.568 0.219 AVE > MSV ACCEPT

Table: 49 Values of average shared variance (ASV) and average variance extracted (AVE)

FACTOR AVE ASV LOGIC RESULT

Instruction 0.605 0.57 AVE > ASV ACCEPT

Technology 0.567 0.168 AVE > ASV ACCEPT

Inclusion 0.568 0.05 AVE > ASV ACCEPT
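
Taken together, the convergent and discriminant checks in Tables 45 through 49 reduce to a handful of comparisons. A minimal Python sketch applying those decision rules to the reported values:

# Decision rules: CR > 0.7, CR > AVE, and AVE > 0.5 (convergent validity);
# AVE > MSV and AVE > ASV (discriminant validity).
reported = {  # factor: (AVE, CR, MSV, ASV), as given in Tables 45-49
    "Instruction": (0.605, 0.821, 0.282, 0.570),
    "Technology":  (0.567, 0.796, 0.282, 0.168),
    "Inclusion":   (0.568, 0.797, 0.219, 0.050),
}
for factor, (ave, cr, msv, asv) in reported.items():
    checks = {"CR > 0.7": cr > 0.7, "CR > AVE": cr > ave, "AVE > 0.5": ave > 0.5,
              "AVE > MSV": ave > msv, "AVE > ASV": ave > asv}
    verdict = "ACCEPT" if all(checks.values()) else "REJECT"
    print(factor, verdict, checks)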

4.16.8.1 CORRELATION AMONG CONSTRUCTS


If the correlation between two variables/constructs exceeds 1.00 in the standardised solution, or if the variables are excessively highly correlated, it is considered an offending estimate. Eliminating one of the dimensions or ensuring true discriminant validity among the constructs are two options for dealing with such offending estimates (Hair et al., 2006).

Table: 50 Correlation among latent constructs
Factor Estimate
Inclusion <--> Instruction 0.466
Technology <--> Instruction 0.525
Technology <--> Inclusion 0.295

The correlations between all of the model's latent constructs are shown in Table 50. The constructs are only moderately associated with one another, and none of the correlations exceeds 0.525. As a result, the risk of multicollinearity is low. Values greater than 0.80 may indicate a concern, and values greater than 0.90 should always be investigated (Hair et al., 2006).

4.16.8.2 STANDARDIZED FACTOR LOADING


A standardised factor loading that exceeds or is extremely close to 1.00 is another form of offending estimate in CFA (Hair et al., 2006). To deal with offending estimates and obtain the best-fitting measurement model, Hair et al. (2006) and Cheng (2001) recommend deleting the offending variables or fixing the associated error variance to a small value (0.005) to guarantee that the loading is less than 1.0. According to Segars and Grover (1993), problematic indicators should be removed one at a time, as removing one indicator or measure can have an immediate impact on other elements of the model. After each removal, the model must be re-estimated.

Table: 51 Standardised regression weights (construct loadings) of the three constructs

Construct              Indicator (Variable)   Standardized Structural Coefficient
Instruction Efficacy   ST 23                  .706
                       ST 24                  .788
                       ST 25                  .857
Technology Efficacy    ST 36                  .664
                       ST 37                  .803
                       ST 38                  .785
Inclusion Efficacy     ST 17                  .714
                       ST 18                  .819
                       ST 20                  .722
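
For readers wishing to reproduce the measurement model, the three-factor CFA can be specified in a few lines with the open-source semopy package. This is a hedged sketch: the item identifiers ST23 through ST38 follow Table 51, and teaching_efficacy.csv is a hypothetical data file containing those columns.

import pandas as pd
import semopy

# Three-factor CFA in semopy's lavaan-style model syntax.
description = """
Instruction =~ ST23 + ST24 + ST25
Technology  =~ ST36 + ST37 + ST38
Inclusion   =~ ST17 + ST18 + ST20
"""
data = pd.read_csv("teaching_efficacy.csv")   # hypothetical file of item responses
model = semopy.Model(description)
model.fit(data)
print(model.inspect())            # loadings, variances, covariances
print(semopy.calc_stats(model))   # chi-square, df, GFI, CFI, TLI, NFI, RMSEA, ...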

Fig. 4: Results of CFA

4.17 OVERALL MODEL FIT (GOODNESS OF FIT)

The evaluation of structural model goodness-of-fit measures is also known as overall structural model fit. Based on the goodness-of-fit measures, the researcher must decide whether to reject or accept the structural model being evaluated (Hair et al., 2006; Reisinger & Mavondo, 2007). There are four primary types of goodness-of-fit measures for evaluating the structural model. These are as follows:

Table: 52 Overall Model Fit

1. ABSOLUTE FIT MEASURES
GOODNESS OF FIT MEASURE                     ACCEPTED VALUE                           CALCULATION          ADEQUACY
Likelihood ratio chi-square statistic       --                                       X² = 49.852          Good
Goodness of Fit Index (GFI)                 > 0.95                                   GFI = 0.948          Moderate

2. INCREMENTAL FIT MEASURES
Tucker-Lewis Index (TLI)                    Acceptable value: ≥ 0.90                 TLI (NNFI) = 0.924   Good
Normed Fit Index (NFI)                      0.90 to 0.95 marginal; > 0.95 good       NFI = 0.927          Marginal
Incremental Fit Index (IFI)                 Acceptable value: ≥ 0.90                 IFI = 0.961          Good

3. NONCENTRALITY-BASED MEASURES
Root Mean Square Error of                   < 0.05 good; 0.05 to 0.10 moderate;      RMSEA = 0.075        Moderate
Approximation (RMSEA)                       > 0.10 bad
Comparative Fit Index (CFI)                 ≥ 0.95 great; > 0.90 traditional         CFI = 0.959          Great

4. PARSIMONIOUS FIT MEASURES
Normed chi-square (X²/df)                   < 3 good; < 5 sometimes permissible      X²/df = 2.077        Good
Relative Fit Index (RFI)                    Acceptable value: ≥ 0.90                 RFI = 0.863          Moderate

* Source: Hair et al. (2006)
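
As an arithmetic check, the normed chi-square in Table 52 follows directly from the statistics reported earlier; a minimal sketch:

# Normed chi-square = chi-square / degrees of freedom;
# < 3 good, < 5 sometimes permissible (Hair et al., 2006).
chi_square, degrees_of_freedom = 49.852, 24
normed = chi_square / degrees_of_freedom
print(round(normed, 3))   # 2.077
print("good" if normed < 3 else "permissible" if normed < 5 else "poor")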

Table 52 lists the different indices from all four types of goodness-of-fit measures. These findings indicate that the proposed model provides an acceptable fit to the data, with the measurements falling within or close to their acceptable bounds, and the model was accepted.

