
Thiele et al.

Comparative evaluation of SEM methods

Mirror, mirror on the wall. A comparative evaluation of new and established structural equation modeling methods.

Completed Research Paper

Kai-Oliver Thiele
Hamburg University of Technology, Germany
k.thiele@tuhh.de

Marko Sarstedt
Otto-von-Guericke-University Magdeburg, Germany
marko.sarstedt@ovgu.de

Christian M. Ringle
Hamburg University of Technology, Germany
c.ringle@tuhh.de

Abstract

Structural equation modeling (SEM) has become a quasi-standard in research when it comes to analyzing the cause-effect relationships between latent variables. Recent research has brought forward a variety of methods for estimating structural equation models, many of which have not been researched in depth. We extend prior research by (1) examining a broad range of SEM methods, including covariance-based SEM, partial least squares (PLS), extended PLS, consistent PLS, generalized structured component analysis, and sumscores; (2) analyzing null relationships in the structural model; (3) considering measurement model results; and (4) reporting additional performance measures that allow a nuanced assessment of the results.

Keywords: structural equation modeling (SEM), partial least squares (PLS), consistent PLS
(PLSc), generalized structured component analysis (GSCA), linear structural relations
(LISREL), sumscores, simulation study.

1. Introduction

Structural equation modeling (SEM) has become a quasi-standard in behavioral science research for analyzing the cause-effect relationships between latent variables (e.g., Bollen, 1989; Hair, Hult, Ringle, and Sarstedt, 2014; Rigdon, 1998). SEM allows researchers to simultaneously analyze relationships between latent variables, measured by sets of indicator variables, while accounting for measurement error. Whereas covariance-based SEM (CBSEM; Jöreskog, 1973) has been the standard method for estimating structural equation models, variance-based partial least squares (PLS; Wold, 1982) has gained increasing prominence in disciplines such as marketing (e.g., Henseler, Ringle, and Sinkovics, 2009; Hair, Sarstedt, Ringle, and Mena, 2012c), management information systems research (Ringle, Sarstedt, and Straub, 2012), and strategic management (Hair et al., 2012a; Hair et al., 2012b; Hair et al., 2013). Despite the popularity of CBSEM and PLS, ongoing research continues to compare the two methods' performance across different data and model constellations (e.g., Fornell and Bookstein, 1982; Hwang, Malhotra, Kim, Tomiuk, and Hong,

2nd International Symposium on Partial Least Squares Path Modeling, Seville (Spain), 2015

2010; Henseler et al., 2014). At the same time, in an effort to combine the strengths of CBSEM and PLS, research has explored alternative methods for estimating structural equation models (e.g., Dijkstra and Henseler, 2015b; Hwang and Takane, 2004). However, despite this diversity of literature on CBSEM, PLS, and related methods, no comprehensive simulation study to date has compared all these SEM methods' performance across a broad range of model constellations that researchers typically encounter in practice. Furthermore, the few studies that do compare PLS's and/or CBSEM's performance with alternative approaches to SEM focus exclusively on structural model comparisons instead of also considering the measurement models. Similarly, the vast majority of comparisons focus on parameter estimation bias, neglecting the methods' sensitivity to Type I and Type II errors, which are fundamental characteristics of every statistical inference method.
Against this background, this study is the first to offer a comprehensive comparison of a broad range of SEM methods. Unlike prior studies (e.g., Dolce and Lauro, 2015; Fornell and Bookstein, 1982), we do not focus only on the well-documented CBSEM (Jöreskog, 1978) and PLS (Wold, 1982) methods, but consider a much broader range of SEM techniques, several of which have not been analyzed in depth. More specifically, we examine the performance of extended PLS (PLSe), which Lohmöller (1979, 1989) recommended to overcome the original PLS method's restrictions in terms of model specifications. For example, PLSe allows for imposing restrictions on model parameters and for assigning indicators to multiple constructs. Researchers have largely overlooked PLSe, and its performance has not yet been examined. Similarly, we consider generalized structured component analysis (GSCA; Henseler, 2012; Hwang et al., 2010; Hwang and Takane, 2004), which, since its introduction a decade ago, has received comparably little attention in behavioral science research. Furthermore, we analyze the performance of consistent PLS (PLSc; Dijkstra, 2014; Dijkstra and Henseler, 2015a, b), which was developed to overcome a limitation of the basic PLS algorithm by providing consistent model estimates through disattenuating the correlations between pairs of latent variables. Finally, since recent research argues in favor of regressions based on sumscores instead of some type of indicator weighting (e.g., Goodhue, Lewis, and Thompson, 2012; Rönkkö and Evermann, 2013), we also consider this estimation technique in our analyses. Table 1, which is an extension of Hwang et al.'s (2010) illustration, provides a summary of the similarities and dissimilarities of the six approaches to SEM.
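The simplest of these techniques, sumscores, can be illustrated with a short sketch: each construct score is an equally weighted composite of its (standardized) indicators, and the path coefficients are then obtained by ordinary least squares regressions among these composites. The following is a hypothetical Python sketch with made-up data and loadings; the study's actual computations were done in R with the authors' own code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: one exogenous and one endogenous construct,
# three indicators each (n = 250); all values are illustrative.
n = 250
xi = rng.normal(size=n)                          # true exogenous construct
eta = 0.5 * xi + rng.normal(scale=0.8, size=n)   # true endogenous construct

# Indicators = loading * construct + measurement error (loadings of .70).
X = np.column_stack([0.7 * xi + rng.normal(scale=0.7, size=n) for _ in range(3)])
Y = np.column_stack([0.7 * eta + rng.normal(scale=0.7, size=n) for _ in range(3)])

def sumscore(block):
    """Equally weighted composite of the standardized indicators."""
    z = (block - block.mean(axis=0)) / block.std(axis=0)
    s = z.sum(axis=1)
    return (s - s.mean()) / s.std()

s_xi, s_eta = sumscore(X), sumscore(Y)

# Structural path via OLS on the composites; with standardized scores,
# the slope equals the correlation between the two sumscores.
path = float(s_xi @ s_eta) / n
print(round(path, 3))
```

Because the composites contain measurement error, this path estimate tends to be attenuated relative to the true latent relationship, which foreshadows the structural model underestimation reported for sumscores (and the other variance-based methods) in the results section.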


Table 1: Similarities and dissimilarities of the six approaches to SEM.

| | CBSEM* | PLS | PLSe** | PLSc*** | GSCA**** | Sumscores |
|---|---|---|---|---|---|---|
| Model specification | | | | | | |
| Latent variables | Factors | Components | Components | Factors | Components | Components |
| Number of equations | One | Two | Two | Two | One | Two |
| Model parameters | Loadings, path coefficients, error variances | Loadings, component weights, path coefficients | Loadings, component weights, path coefficients | Loadings, path coefficients | Loadings, component weights, path coefficients | Path coefficients |
| Parameter estimation | | | | | | |
| Input data | Covariances/correlations | Raw data | Covariances/correlations | Raw data | Raw data | Raw data |
| Estimation method | Maximum likelihood (mainly) | Least squares | Least squares | Least squares | Least squares | Least squares |
| Global optimization function | Yes | No | No | No | Yes | No |
| Normality assumption | Required for maximum likelihood | Not required | Not required | Not required | Not required | Not required |
| Model fit measures | Overall and local | Local | Local | Overall and local | Local | Local |

Notes: * covariance-based structural equation modeling; ** extended PLS; *** consistent PLS; **** generalized structured component analysis.


Further, we extend prior research by considering additional criteria to more comprehensively analyze the results in light of the most recent discussions on SEM in general and variance-based SEM in particular (e.g., Henseler et al., 2014). Specifically, instead of focusing on the structural model only, we also consider measurement model results. We analyze the simulation study results regarding the rate of convergence and proper solutions, the mean absolute error (MAE), and the sensitivity to Type II errors (i.e., statistical power). In addition, we take the mean error (ME) into account, which reveals a method's tendency to over- or underestimate path coefficients. Thereby, we address considerations regarding variance-based SEM methods' systematic over- and underestimation of results (Hui and Wold, 1982) and offer additional insights into the methods' behavior. Moreover, we include null paths, which allows us to test the various methods' efficacy in detecting zero relationships. By analyzing each method's sensitivity to Type I errors (i.e., false positives), we take the criticism of Reinartz, Haenlein, and Henseler's (2009) study into account (Goodhue et al., 2012) and offer a more nuanced analysis of the results. Based on our findings, we offer recommendations on the methods' use.
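The two accuracy measures used throughout the study, MAE and ME, are straightforward to compute from the estimates obtained across replications. The following is a minimal Python sketch with illustrative numbers (the study itself was computed in R):

```python
import numpy as np

def mean_absolute_error(estimates, population_value):
    """MAE: average absolute deviation of the estimates from the
    pre-specified population parameter (magnitude of the inaccuracy)."""
    estimates = np.asarray(estimates, dtype=float)
    return np.mean(np.abs(estimates - population_value))

def mean_error(estimates, population_value):
    """ME: average signed deviation; a negative value indicates a tendency
    to underestimate, a positive value a tendency to overestimate."""
    estimates = np.asarray(estimates, dtype=float)
    return np.mean(estimates - population_value)

# Illustrative path estimates from four replications of a .50 population path.
est = [0.46, 0.44, 0.55, 0.43]
print(round(mean_absolute_error(est, 0.50), 3))  # 0.055
print(round(mean_error(est, 0.50), 3))           # -0.03 -> underestimation
```

The ME complements the MAE because signed deviations can cancel out: a method can have a small ME (little systematic bias) and still a large MAE (large scatter around the population value).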

2. Simulation design and model estimation

Our simulation design draws on related research by Becker, Rai, Ringle and Völckner
(2013), Chin, Marcolin and Newsted (2003), Henseler (2012), Hwang et al. (2010), and
Reinartz et al. (2009). Specifically, we consider the basic path model shown in Figure 1 with
low (i.e., .15; γ2, γ3, β6), medium (i.e., .03; β3), and high (i.e., .05; γ1, β1, β2, β4, β5) pre-
specified path coefficients. Analogous to Reinartz et al. (2009), we manipulate the following
factors and factor levels:
• The number of indicators per latent variable [2, 4, 6, 8],
• The indicator loadings [low equal: λ = .50; medium equal: λ = .70; high equal: λ = .90; unequal: λ = .50 and λ = .90],
• The skewness/kurtosis [none: 0/0, moderate: 1/6, high: 2/12.8], and
• The sample size [100, 250, 500, 1,000, 10,000].

The resulting design is full factorial, and we conduct 300 replications (data sets) for each factor-level combination to obtain stable average outcomes for our analysis (i.e., the analysis includes 4·4·3·5·300 = 72,000 datasets and a total of 432,000 computations for the six SEM methods under research). However, to provide an objective picture of the results, we exclude all factor-level combinations in which the composite reliability of the construct measures falls below the critical value of .60 (Nunnally, 1978), since such results are of only limited use in practical applications. Hence, all factor-level constellations with 2 or 4 indicators per latent variable, each with a low loading of λ = .50, were excluded, which reduces the factorial design to 72,000 – 2·(1·1·3·5·300) = 63,000 datasets and thus 378,000 computations for the six SEM methods under research.
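The size of the factorial design and the exclusion step can be verified with a short computation (a sketch; the factor levels are those listed above, and the labels are only for readability):

```python
from itertools import product

indicators = [2, 4, 6, 8]
loadings = ["low (.50)", "medium (.70)", "high (.90)", "unequal (.50/.90)"]
distributions = ["none (0/0)", "moderate (1/6)", "high (2/12.8)"]
sample_sizes = [100, 250, 500, 1000, 10000]
replications = 300

# Full factorial design: 4 * 4 * 3 * 5 = 240 cells.
cells = list(product(indicators, loadings, distributions, sample_sizes))
print(len(cells) * replications)  # 72000 datasets in the full design

# Exclude low-reliability cells: 2 or 4 indicators combined with all-low loadings.
kept = [c for c in cells if not (c[0] in (2, 4) and c[1] == "low (.50)")]
print(len(kept) * replications)   # 63000 datasets after the exclusion
```

Each retained dataset is then estimated with all six methods, which yields the 378,000 computations reported in the text.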


Figure 1: Simulation model. [Path diagram not reproduced in this text version: it depicts the exogenous constructs (ξ) and endogenous constructs (η), connected by the pre-specified path coefficients (.50, .30, and .15) and two null paths (γ = .0).]

The data generation was performed by means of the Mattson (1997) method. We used the statistical software R and its snowfall package for parallel computing (Knaus, 2013) for all computations. The CBSEM computations were conducted with R's sem package (Fox et al., 2013), while the PLS computations were carried out with the semPLS package (Monecke and Leisch, 2012); we conducted the PLSe, PLSc, GSCA, and sumscores computations with our own R code.

3. Simulation results

In terms of convergence, our results show that all the methods usually converge. The only exception is CBSEM, which fails to converge and exhibits other improper solutions (i.e., Heywood cases) in a few factor-level combinations with relatively low construct reliability and/or a small number of observations. This finding is important, as our study substantiates all the methods' capability to provide generally reasonable results in various data situations.
With regard to parameter accuracy in terms of the MAE in the measurement models, we find that CBSEM shows the best performance, followed by PLSc and the remaining variance-based SEM methods PLS, PLSe, and GSCA. As expected, sumscores perform similarly to the other approaches when loadings are equal. However, when the loading patterns are heterogeneous (which is quite common in practical applications), sumscores' performance drops significantly in comparison with the alternative methods and unmistakably displays the worst parameter accuracy. The relative advantage of CBSEM and PLSc declines with larger numbers of indicators per measurement model. In fact, with eight indicators and only 100 observations, PLSc shows the worst performance.
The results for parameter accuracy in terms of the MAE in the structural model show that PLS, PLSe, GSCA, and sumscores have a higher parameter accuracy than CBSEM and PLSc in situations with small sample sizes, particularly when the measurement models have only a few indicators or include low loadings (Figure 2). When the sample size increases, CBSEM and PLSc's performance improves considerably. In situations with 500 and more observations, CBSEM and PLSc usually outperform the other methods. In many situations, sumscores show the lowest performance in the structural model, especially when the loading patterns of the underlying latent variables are heterogeneous.


Notes: The figure shows the MAE (i.e., mean absolute error) in the structural model (SM) given different design factors for all six SEM methods. For example, with 100 observations and two indicators per latent variable, PLS, on average, over- or underestimates the path coefficient by roughly .10.
Figure 2: Mean absolute error (MAE) in the structural model (SM).

In order to gauge each design factor's impact on parameter accuracy, we ran ANOVAs with log10(MAE) as the dependent variable. The results depicted in Figure 3 are clustered into groups of corresponding effect sizes, operationalized as partial η², ranging from essentially negligible effects (partial η² < .10) to substantial effects (partial η² > .70). None of the methods exhibit significant three- or four-way interactions, while weak to moderate two-way interactions are present between sample size × number of indicators and sample size × indicator loadings for some methods. The main effects clearly dominate in the explanation of variance for all methods. PLS, PLSe, GSCA, and sumscores exhibit factor-related main effects at
similar levels, with a strong dependency on the underlying loadings, followed by the number of indicators and the number of observations. Compared to the aforementioned methods, PLSc and CBSEM yield a different pattern of results. They are mostly affected by the number of
observations, which corresponds to prior assumptions, especially regarding the use of CBSEM (Boomsma and Hoogland, 2001). Moreover, the analysis reveals that the non-normality of the data partially affects the parameter accuracy in the measurement model for CBSEM. This result is noteworthy, since researchers often assume that non-normal data has a uniformly negative effect on the quality of CBSEM results (e.g., Hair et al., 2011).

[Figure 3 matrix not reproduced in this text version. For each of the six methods (PLS, PLSc, PLSe, GSCA, sumscores, and CBSEM), with one column for the measurement models (MM) and one for the structural model (SM), it reports the effect sizes of the design factors (loadings, number of indicators, distributions, and log10(observations)) and of their two-way interactions.]
Notes: The figure shows the results of ANOVAs using log10(MAE) as the dependent variable across all six SEM methods. The classification ranges from no effect (○, partial η² < .10) to substantial effect (●, partial η² > .70); the first column represents the results for the measurement models (MM), whereas the second column shows the results for the structural model (SM).
Figure 3: ANOVA of the log10(MAE) in the measurement and structural models.

When looking at the ME of the estimated loadings in the measurement models, we find that CBSEM and PLSc perform best, with almost no bias. This means that, on average, both methods recover the pre-specified loadings almost exactly. In contrast, PLS, PLSe, GSCA, and sumscores overestimate the loadings. These methods' overestimation tendencies decline with a larger number of indicators, larger sample sizes, and higher loading levels. Sumscores show the highest bias in almost all situations.
In contrast, analyzing the ME of the structural model estimates reveals an underestimation for PLS, PLSe, GSCA, and sumscores, while CBSEM and PLSc show almost no bias (Figure 4). Only in situations with small sample sizes and low loadings does PLSc overestimate the expected coefficients. In these extreme situations, PLSc's disattenuation of construct correlations appears too radical. Sumscores' measurement model bias has consequences for the structural model results. In almost all situations, sumscores exhibit the highest bias, especially when the measurement models' loadings are heterogeneous.


Notes: The figure shows the ME (i.e., mean error) in the structural model (SM) given different design factors for all six SEM methods. For example, with 100 observations and two indicators per latent variable, PLS, on average, underestimates the path coefficient by roughly .05.
Figure 4: Mean error (ME) in the structural model (SM).

The next analyses address the question of how well the methods render relationships with low (.15), medium (.30), and high (.50) effect sizes significant (i.e., statistical power). The results reveal that PLS and PLSe perform best in terms of statistical power, followed by GSCA and sumscores, while CBSEM and PLSc clearly perform worst. PLS's and PLSe's relative advantages are particularly pronounced in situations with low to moderate loadings, few indicators per measurement model, and low effect sizes. For example, in the situation with a low effect size (.15), six indicators per measurement model, low outer loadings (.50), and 250
observations, CBSEM and PLSc render the relationships significant with a probability of merely 27% and 11%, respectively. In contrast, in this factor-level constellation, PLS and PLSe achieve
power levels of 84%. In general, PLSc performs very similarly to CBSEM, except when the loadings are moderate or low, where the method clearly lags behind its covariance-based counterpart.
Finally, flawed interpretations of results and false conclusions can also occur if the methods indicate relationships as significant even though this is not the case in the population (i.e., Type I error). We find that all the methods have very low Type I error rates of less than 10%. The only exception is PLSe in settings with low loadings and small sample sizes, where Type I error rates increase up to 41%.
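Power and Type I error rates of this kind are rejection frequencies across Monte Carlo replications. The following is a generic Python sketch (not the paper's procedure, which applies the six SEM methods' own significance tests): it uses a simple OLS slope test with a normal approximation, so the numbers are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def rejection_rate(true_path, n=250, reps=500, z_crit=1.96):
    """Share of Monte Carlo replications in which a two-sided OLS slope test
    rejects the null of a zero path (normal approximation). This equals the
    statistical power if true_path != 0 and the Type I error rate if it is 0."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = true_path * x + rng.normal(size=n)
        b = (x @ y) / (x @ x)                              # OLS slope estimate
        resid = y - b * x
        se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))  # slope standard error
        if abs(b / se) > z_crit:
            hits += 1
    return hits / reps

print(rejection_rate(0.15))  # power for a low effect size
print(rejection_rate(0.0))   # Type I error rate, close to the nominal 5%
```

Reading the two rates together is what distinguishes a genuinely more sensitive method from one that simply rejects more often: high power is only meaningful if the Type I error rate stays near the nominal level.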

4. Discussion and conclusion

Our results suggest that there are two groups of SEM methods, which differ noticeably in their behavior: CBSEM and PLSc on the one hand, and PLS, PLSe, GSCA, and sumscores on the other. CBSEM and PLSc clearly outperform the other methods in terms of measurement model parameter accuracy when sample sizes are high. With lower sample sizes, however, these advantages diminish. In the structural model, CBSEM and PLSc exhibit lower parameter accuracy than PLS, PLSe, GSCA, and sumscores in all small sample size situations (i.e., 100 observations). With regard to the origins of the parameter inaccuracy, certain key factors, such as low loadings, the sample size, and the number of indicators per measurement model, influence CBSEM and PLSc more strongly than the other methods. In addition, the data distribution has an effect on CBSEM results, which is not relevant for the other methods. Analyzing the direction of the estimation bias shows that PLS, PLSe, GSCA, and sumscores overestimate the parameters in the measurement model and underestimate them in the structural model. This kind of estimation bias diminishes with a larger number of indicators per measurement model and larger numbers of observations. Hence, PLS, PLSe, GSCA, and sumscores are all subject to the well-known consistency at large behavior (Hui and Wold, 1982). In contrast, CBSEM and PLSc show very little under- or overestimation tendency, except when measurement models are parsimonious (e.g., two indicators per measurement model) and sample sizes are small (e.g., 100 observations). Finally, CBSEM and PLSc clearly lag behind the other methods in terms of statistical power. This characteristic translates into CBSEM and PLSc having a slight advantage in avoiding false positives (Type I errors), especially in situations with small sample sizes, few indicators, and low loadings.
Within the first group of methods, PLSc's performance is very similar to that of CBSEM. This result is fundamental and noteworthy given the intensive debate on the differences between CBSEM and PLS (e.g., Barroso, Carrión, and Roldán, 2010; Dolce and Lauro, 2015; Henseler, 2012; Hwang et al., 2010). PLSc always converges and is thus slightly more advantageous than CBSEM. In contrast, (some) low loadings in the measurement models below the usual cut-off criteria (e.g., loadings of .50) affect the PLSc estimates negatively; in such situations, PLSc provides inferior outcomes compared to CBSEM. In all other situations, PLSc and CBSEM yield highly similar results. Hence, researchers should consider PLSc a valuable alternative to CBSEM (e.g., Bentler and Huang, 2014; Dijkstra, 2014; Dijkstra and Henseler, 2015b). Since the PLSc method offers several advantages, such as the capability to estimate complex models with limited sample sizes, non-normal data, few indicators, and formatively measured constructs (Hair et al., 2012c), it proves particularly valuable in situations in which CBSEM reaches its limits due to its more restrictive assumptions. However, as research has not yet brought forward goodness-of-fit
measures that would allow for disclosing model misspecifications within a PLSc framework, PLSc's use for theory testing is somewhat limited. In line with recent research on PLS
(Henseler et al., 2014), the SRMR criterion seems a promising option to identify model
misspecification when using PLSc (Dijkstra and Henseler, 2015a).
In the other group of methods, PLS and PLSe perform slightly better than GSCA, which outperforms sumscores. Our results suggest that PLSe offers a viable alternative to PLS. While the differences in the methods' performance in terms of parameter accuracy are often marginal, PLSe offers much more flexibility in terms of model specification (e.g., relating an indicator to multiple latent variables and orthogonalizing constructs in the structural model, which entails a constrained null relationship between them). This makes PLSe particularly suitable for research settings that have traditionally been viewed as favorable for the application of PLS (i.e., complex models, small sample sizes, and formatively measured constructs). Moreover, PLSe differs from the other methods in that it has the greatest statistical power. At the same time, this characteristic increases the probability of false positives (Type I errors) in certain constellations. Finally, some researchers propagate the use of sumscores (e.g., Goodhue et al., 2012; Rönkkö and Evermann, 2013). We find that sumscores' performance lags behind that of all the other methods, especially in situations with heterogeneous loading patterns. Consequently, we dismiss sumscores as a useful method, since there is no "more for less" in SEM (i.e., using a simpler method to obtain better results).
Our findings help applied researchers select an appropriate SEM method for their particular research question, depending on the specific model and data situation they face. We also identify two groups of methods, which will both play a role in future research. For confirmatory SEM based on covariances, constructs, and common factors, CBSEM and PLSc are the methods of choice (Bentler and Huang, 2014; Dijkstra, 2014). In contrast, PLS, PLSe, and GSCA represent variance-based SEM methods, which are advantageous for predictive modeling (Rigdon, 2012, 2014; Sarstedt, Ringle, Henseler, and Hair, 2014). That is, depending on the research goal (confirmatory or predictive modeling), researchers must decide between using CBSEM and PLSc, or the variance-based SEM methods (i.e., PLS, PLSe, GSCA). Once researchers know which family of SEM methods to use, our study offers further insights into selecting an appropriate method in a particular model and data situation.
Based on our results, we call for a multi-method approach to SEM. Researchers should first select an appropriate method for their research goal (i.e., confirmatory or predictive modeling). Next, applying an alternative method within the same family of methods allows them to cross-validate their findings. The outcomes should be very similar; otherwise, certain issues regarding the model and/or the data may negatively affect the results, which the multi-method SEM approach will reveal.
Finally, our study also offers promising avenues for further research. Subsequent studies should extend the simulation design by analyzing more complex modeling elements, such as mediators, hierarchical component models, interaction terms, and nonlinear effects. While the use of these modeling elements has become far more popular recently, little is known about the different SEM methods' efficacy in estimating such model structures. In terms of measurement models, future research should also consider the methods' capabilities to estimate models with formatively measured constructs. While this kind of measurement model is well established, prior research has focused only on CBSEM's capabilities to estimate such models.

5. References

Barroso, C., Carrión, G., & Roldán, J. (2010). Applying maximum likelihood and PLS on different sample sizes: Studies on SERVQUAL model and employee behavior model. In
V. Esposito Vinzi, W. W. Chin, J. Henseler, & H. Wang (Eds.), Handbook of partial
least squares (pp. 427-447). Berlin: Springer.
Becker, J.-M., Rai, A., Ringle, C. M., & Völckner, F. (2013). Discovering unobserved
heterogeneity in structural equation models to avert validity threats. MIS Quarterly,
37(3), 665-694.
Bentler, P. M., & Huang, W. (2014). On components, latent variables, PLS and simple methods: Reactions to Rigdon's rethinking of PLS. Long Range Planning, 47(3), 138-145.
Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley.
Boomsma, A., & Hoogland, J. J. (2001). The robustness of LISREL modeling revisited. In R.
Cudeck, S. du Toit & D. Sörbom (Eds.), Structural equation modeling: Present and
future (pp. 139-168). Chicago: Scientific Software International.
Chin, W. W., Marcolin, B. L., & Newsted, P. R. (2003). A partial least squares latent variable
modeling approach for measuring interaction effects: Results from a Monte Carlo
simulation study and an electronic-mail emotion/adoption study. Information Systems
Research, 14(2), 189-217.
Dijkstra, T. K. (2014). PLS' Janus face – Response to Professor Rigdon's ‘Rethinking partial
least squares modeling: In praise of simple methods’. Long Range Planning, 47(3), 146-
153.
Dijkstra, T. K., & Henseler, J. (2015a). Consistent and asymptotically normal PLS estimators
for linear structural equations. Computational Statistics & Data Analysis, 81(1), 10-23.
Dijkstra, T. K., & Henseler, J. (2015b). Consistent partial least squares path modeling. MIS Quarterly, forthcoming.
Dolce, P., & Lauro, N. (2015). Comparing maximum likelihood and PLS estimates for
structural equation modeling with formative blocks. Quality & Quantity, forthcoming.
Fornell, C. G., & Bookstein, F. L. (1982). Two structural equation models: LISREL and PLS
applied to consumer exit-voice theory. Journal of Marketing Research, 19(4), 440-452.
Fox, J., Nie, Z., Byrnes, J., Culbertson, M., DebRoy, S., Friendly, M., Jones, R. H., Kramer, A., & Monette, G. (2013). R package sem: Structural equation models (version 3.1-3). Retrieved from http://cran.r-project.org/web/packages/sem/.
Goodhue, D. L., Lewis, W., & Thompson, R. (2012). Does PLS have advantages for small
sample size or non-normal data? MIS Quarterly, 36(3), 891-1001.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. Journal of
Marketing Theory and Practice, 19(2), 139-151.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2012a). Partial least squares: The better approach to
structural equation modeling? Long Range Planning, 45(5-6), 312-319.
Hair, J. F., Sarstedt, M., Pieper, T. M., & Ringle, C. M. (2012b). The use of partial least squares structural equation modeling in strategic management research: A review of past practices and recommendations for future applications. Long Range Planning, 45(5-6), 320-340.
Hair, J. F., Sarstedt, M., Ringle, C. M., & Mena, J. A. (2012c). An assessment of the use of partial least squares structural equation modeling in marketing research. Journal of the Academy of Marketing Science, 40(3), 414-433.
Henseler, J. (2012). Why generalized structured component analysis is not universally preferable to structural equation modeling. Journal of the Academy of Marketing Science, 40(3), 402-413.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2013). Partial least squares structural equation
modeling: Rigorous applications, better results and higher acceptance. Long Range
Planning, 46(1-2), 1-12.
Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2014). A primer on partial least squares structural equation modeling (PLS-SEM). Thousand Oaks, CA: Sage.


Henseler, J., Ringle, C. M., & Sinkovics, R. (2009). The use of partial least squares path modeling in international marketing. Advances in International Marketing, 20, 277-319.
Henseler, J., Dijkstra, T. K., Sarstedt, M., Ringle, C. M., Diamantopoulos, A., Straub, D. W.,
Ketchen, D. J., Hair, J. F., Hult, G. T. M., & Calantone, R. J. (2014). Common beliefs
and reality about partial least squares: Comments on Rönkkö & Evermann (2013).
Organizational Research Methods, 17(2), 182-209.
Hui, B. S., & Wold, H. (1982). Consistency and consistency at large of partial least squares estimates. In K. G. Jöreskog & H. Wold (Eds.), Systems under indirect observation: Part II (pp. 119-130). Amsterdam: North-Holland.
Hwang, H., Malhotra, N. K., Kim, Y., Tomiuk, M. A., & Hong, S. (2010). A comparative
study on parameter recovery of three approaches to structural equation modeling. Journal
of Marketing Research, 47(4), 699-712.
Hwang, H., & Takane, Y. (2004). Generalized structured component analysis. Psychometrika,
69(1), 81-99.
Jöreskog, K. G. (1973). A general method for estimating a linear structural equation system. In A. S. Goldberger & O. D. Duncan (Eds.), Structural equation models in the social sciences (pp. 255-284). New York: Seminar Press.
Jöreskog, K. G. (1978). Structural analysis of covariance and correlation matrices.
Psychometrika, 43(4), 443-477.
Knaus, J. (2013). R package snowfall: Easier cluster computing (version: 1.84-6). Retrieved
from: cran.r-project.org/web/packages/snowfall/.
Lohmöller, J.-B. (1979). Estimating parameters of linear structural relation models under
partial least-squares criteria. Forschungsbericht 79.01: Hochschule der Bundeswehr,
Munich, Germany.
Lohmöller, J.-B. (1989). Latent variable path modeling with partial least squares.
Heidelberg: Physica.
Mattson, S. (1997). How to generate non-normal data for simulation of structural equation
models. Multivariate Behavioral Research, 32(4), 355-373.
Monecke, A., & Leisch, F. (2012). semPLS: Structural equation modeling using partial least
squares. Journal of Statistical Software, 48(3), 1-32.
Nunnally, J. C. (1978). Psychometric theory. New York: McGraw Hill.
Reinartz, W. J., Haenlein, M., & Henseler, J. (2009). An empirical comparison of the efficacy
of covariance-based and variance-based SEM. International Journal of Research in
Marketing, 26(4), 332-344.
Rigdon, E. E. (1998). Structural equation modeling. In G. A. Marcoulides (ed.), Modern
Methods for Business Research (pp. 251-294). Mahwah: Erlbaum.
Rigdon, E. E. (2012). Rethinking partial least squares path modeling: In praise of simple
methods. Long Range Planning, 45(5-6), 341-358.
Rigdon, E. E. (2014). Rethinking partial least squares path modeling: Breaking chains and
forging ahead. Long Range Planning, 47(3), 161-167.
Ringle, C. M., Sarstedt, M., & Straub, D. W. (2012). A critical look at the use of PLS-SEM in
MIS Quarterly. MIS Quarterly, 36(1), iii-xiv.
Rönkkö, M., & Evermann, J. (2013). A critical examination of common beliefs about partial
least squares path modeling. Organizational Research Methods, 16(3), 425-448.
Sarstedt, M., Ringle, C. M., Henseler, J., & Hair, J. F. (2014). On the emancipation of PLS-
SEM: A commentary on Rigdon (2012). Long Range Planning, 47(3), 154-160.
Wold, H. (1982). Soft modeling: The basic design and some extensions. In K. G. Jöreskog & H. Wold (Eds.), Systems under indirect observation: Part II (pp. 1-54). Amsterdam: North-Holland.
