
4.1. Reliability and Validity Analysis

4.1.1. Reliability Analysis

Reliability analysis is a statistical procedure used to evaluate the consistency of a
measurement instrument or scale (Hajjar, 2018). It establishes the degree to which a scale's items
or questions produce consistent results across multiple measurements or testing scenarios. The
main goal of reliability analysis is to ensure that the measurement tool used in a study
consistently measures the construct (variable) it is supposed to measure, thereby reducing
measurement error and improving the accuracy of the data obtained.

Cronbach's alpha coefficient, which has a range from 0 to 1 with higher values indicating
stronger internal consistency, is a widely used indicator of internal consistency (George &
Mallery, 2003). For research purposes, a Cronbach's alpha value of 0.70 or higher is typically
acceptable (Cortina, 1993).
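The coefficient follows directly from the item-level variances via the standard formula α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜₒₜₐₗ). The sketch below illustrates the computation on hypothetical Likert-scale responses (simulated data, not the study's dataset, which was analyzed with a statistical package):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical 5-item Likert scale answered by 200 respondents: each item is a
# noisy reflection of one underlying trait, so the items should cohere.
rng = np.random.default_rng(42)
trait = rng.normal(3.0, 1.0, size=(200, 1))
responses = np.clip(np.rint(trait + rng.normal(0.0, 0.7, size=(200, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```

Identical items would yield α = 1, while unrelated items drive α toward 0, which is why the 0.70 threshold serves as a practical floor for scale reliability.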

In the context of the current study, conducting reliability analysis using Cronbach's alpha is essential
to ensure that the measurement scales used to capture constructs like Corporate Social
Responsibility (CSR), Electronic Word-of-Mouth (E-WOM), and brand equity are reliable and
consistent. Table 1 presents the reliability statistics for the study's constructs.

Table 1: Reliability Statistics

Construct                          Cronbach's Alpha   N of Items
Community Support                  0.900              5
Environmental Support              0.867              3
Employee Relation                  0.886              4
Product and Service Orientation    0.706              2
EWOM                               0.844              3
Brand Equity                       0.796              4
Overall                            0.862              21

The results of the reliability analysis revealed strong internal consistency for the
measurement scales used in the study, as indicated by the high Cronbach's alpha
coefficients: Community Support (α = 0.900), Environmental Support (α = 0.867),
Employee Relation (α = 0.886), Product and Service Orientation (α = 0.706), E-WOM (α
= 0.844), Brand Equity (α = 0.796), and an overall alpha of 0.862. These coefficients
surpass the commonly accepted threshold value of 0.70, signifying the scales' reliability
in consistently measuring their respective constructs (Hair et al., 2022; Hajjar, 2018).

4.1.2. Validity Analysis

Validity is a critical criterion that ensures the accuracy of measurement instruments or scales in
reflecting the intended concept under investigation (Hair et al., 2022). It ensures the integrity and
truthfulness of research outcomes (Bell et al., 2018). The validation process encompasses various
types, including content (face validity), construct validity (convergent and discriminant validity),
and criterion-related validity (concurrent and predictive validity) (Cooper & Schindler, 2014;
Sekaran & Bougie, 2016). In this study, face validity and construct validity are employed to
ensure the accuracy of the measurement instrument. Face validity reflects experts' agreement
that an instrument measures what its name implies (Sekaran & Bougie, 2016). Since the scales
used in the present study have been validated by domain experts, including Washburn and Plank
(2002), Mandhachitara and Poolthong (2008), and Wei et al. (2021), face validity is ensured.

Additionally, construct validity is rigorously assessed through exploratory factor analysis (EFA),
which seeks underlying structures among variables (Tukey, 1980). To this end, Principal
Component Analysis (PCA) with promax rotation is chosen for its efficiency with larger
datasets, as suggested by Gaskin and Happell (2014). Before applying PCA, two prerequisites of
factor analysis, namely the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's Test of Sphericity,
are examined to determine the suitability of the data for factor analysis (Hair et al., 2018).

4.1.2.1. KMO and Bartlett's Test of Sphericity

The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's Test of Sphericity
are both integral parts of the validation process for factor analysis (Field, 2013). Factor analysis
is employed to identify latent constructs or underlying factors within a dataset (Hair et al., 2022).
The KMO measure assesses whether the data are suitable for factor analysis by evaluating the
common variance among variables, indicating whether the dataset is conducive to extracting
meaningful factors (Field, 2013). In this context, a high KMO value (close to 1) signifies that the
variables share enough interrelatedness to extract meaningful latent factors (Hair et al., 2022).
Conversely, Bartlett's Test of Sphericity investigates whether the correlations between variables
are significant enough to proceed with factor analysis (Tabachnick & Fidell, 2013). A p-value
below the significance level (typically 0.05) in this test suggests substantial correlations among
variables, thereby validating the appropriateness of factor analysis (Field, 2013). In summary, the
KMO measure and Bartlett's Test collectively guide the decision to proceed with factor analysis
by ensuring that the dataset exhibits sufficient interrelatedness and significant correlations
among variables.
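Both diagnostics follow directly from the correlation matrix R. The sketch below (a minimal numpy/scipy implementation, with simulated data standing in for the study's survey responses) computes Bartlett's chi-square statistic, −[(n−1)−(2p+5)/6]·ln|R|, and the overall KMO index, which compares squared zero-order correlations against squared partial (anti-image) correlations:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: H0 is that the correlation matrix is an identity."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi_square = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi_square, stats.chi2.sf(chi_square, df)

def kmo_overall(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    S = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(S), np.diag(S)))
    partial = -S / scale                    # anti-image (negative partial) correlations
    off = ~np.eye(R.shape[0], dtype=bool)   # off-diagonal mask
    r2 = (R[off] ** 2).sum()
    p2 = (partial[off] ** 2).sum()
    return r2 / (r2 + p2)

# Simulated responses: six indicators driven by two latent factors, so the
# correlation matrix is far from an identity and factoring is appropriate.
rng = np.random.default_rng(7)
factors = rng.normal(size=(300, 2))
loading_matrix = np.array([[0.8, 0.0]] * 3 + [[0.0, 0.8]] * 3)
data = factors @ loading_matrix.T + rng.normal(0.0, 0.6, size=(300, 6))
chi2, p_value = bartlett_sphericity(data)
print(f"chi-square = {chi2:.1f}, p = {p_value:.4f}, KMO = {kmo_overall(data):.3f}")
```

On such factor-structured data the test rejects the identity hypothesis and the KMO value exceeds the conventional 0.5 floor, mirroring the pattern of results reported for the study's own dataset.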

Table 2: KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy            0.812
Bartlett's Test of Sphericity    Approx. Chi-Square        2830.766
                                 df                        210
                                 Sig.                      0.000

In the context of the present study, the KMO value of 0.812 (presented in table 2 above) suggests
that the dataset is suitable for factor analysis, indicating that the variables are correlated enough
to extract meaningful factors. Additionally, the reported p-value of 0.000 (p < 0.001) for Bartlett's Test of
Sphericity indicates significant correlations between variables, further supporting the suitability
of the data for factor analysis. These results suggest that the dataset used in the study is
appropriate for conducting exploratory factor analysis, validating the use of this technique to
identify underlying constructs related to the study's variables.

4.1.2.2. Factor Loadings

In the context of factor analysis, the concept of factor loading is crucial: it plays a significant
role in assessing both convergent and discriminant validity. Convergent validity refers to the
degree to which different items measuring the same construct exhibit correlations with one
another (Cooper & Schindler, 2014). A factor loading quantifies how strongly an observed
variable (indicator) is related to the latent factor that underlies it in a factor analysis model;
in essence, it measures how well an observed variable aligns with a given factor. Higher factor
loadings indicate a stronger association between the observed variable and its underlying latent
factor.

In assessing convergent validity, researchers often rely on criteria such as having components
with a minimum eigenvalue of 1 and factor loadings of at least 0.50 for items loading on
constructs (Hair et al., 2022). When these conditions are met, it signifies that the instrument
demonstrates convergent validity. Moreover, to ascertain discriminant validity through
exploratory factor analysis (EFA), analysts can examine the pattern matrix. This step ensures that
each variable displays significant loadings only on its relevant parent construct, reinforcing the
distinction between different constructs (Hair et al., 2022). In this study, the obtained factor
loadings satisfy both criteria, underscoring convergent validity and demonstrating a clear
delineation between constructs, as illustrated in the pattern matrix in Table 3 below.

Table 3: Pattern Matrix

Item     Construct                          Loading
CS1      Community Support                  0.638
CS2      Community Support                  0.840
CS3      Community Support                  0.836
CS4      Community Support                  0.862
CS5      Community Support                  0.947
ES1      Environmental Support              0.894
ES2      Environmental Support              0.932
ES3      Environmental Support              0.819
ER1      Employee Relation                  0.962
ER2      Employee Relation                  0.794
ER3      Employee Relation                  0.735
ER4      Employee Relation                  0.766
PSO1     Product and Service Orientation    0.915
PSO2     Product and Service Orientation    0.536
EWOM1    EWOM                               0.850
EWOM2    EWOM                               0.902
EWOM3    EWOM                               0.798
BE1      Brand Equity                       0.562
BE2      Brand Equity                       0.825
BE3      Brand Equity                       0.886
BE4      Brand Equity                       0.633

Note: Each item loads on the component corresponding to its parent construct.
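The extraction step behind such a pattern matrix can be sketched in a few lines. The example below computes unrotated principal-component loadings from the correlation matrix and applies the eigenvalue ≥ 1 retention rule described above; it uses simulated two-construct data rather than the study's dataset, and it omits the promax rotation that the study's software applied to produce the final pattern matrix:

```python
import numpy as np

def pca_loadings(data, min_eigenvalue=1.0):
    """Unrotated principal-component loadings, retaining components whose
    eigenvalue meets the Kaiser criterion (eigenvalue >= 1)."""
    R = np.corrcoef(data, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(R)
    order = np.argsort(eigenvalues)[::-1]              # sort descending
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
    keep = eigenvalues >= min_eigenvalue
    # Loading = eigenvector * sqrt(eigenvalue): the item-component correlation
    return eigenvectors[:, keep] * np.sqrt(eigenvalues[keep])

# Simulated two-construct data: items 0-2 reflect one factor, items 3-5 another,
# so the Kaiser criterion should retain two components.
rng = np.random.default_rng(11)
factors = rng.normal(size=(300, 2))
pattern = np.array([[0.8, 0.0]] * 3 + [[0.0, 0.8]] * 3)
data = factors @ pattern.T + rng.normal(0.0, 0.6, size=(300, 6))
loadings = pca_loadings(data)
print("components retained:", loadings.shape[1])
```

Each item then shows its largest loading on the component representing its own construct, which is the pattern of clean, above-0.50 loadings that supports convergent and discriminant validity.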
