Partial least squares path modeling (PLS) was developed in the 1960s and 1970s as a method for predictive modeling. In the succeeding years, applied disciplines, including organizational and management research, have developed beliefs about the capabilities of PLS and its suitability for different applications. On close examination, some of these beliefs prove to be unfounded and to bear little correspondence to the actual capabilities of PLS. In this article, we critically examine several of these commonly held beliefs. We describe their origins, and, using simple examples, we demonstrate that many of these beliefs are not true. We conclude that the method is widely misunderstood, and our results cast strong doubts on its effectiveness for building and testing theory in organizational research.
Discriminant validity was originally presented as a set of empirical criteria that can be assessed from multitrait-multimethod (MTMM) matrices. Because datasets used by applied researchers rarely lend themselves to MTMM analysis, the need to assess discriminant validity in empirical research has led to the introduction of numerous techniques, some of which have been introduced in an ad hoc manner and without rigorous methodological support. We review various definitions of and techniques for assessing discriminant validity and provide a generalized definition of discriminant validity based on the correlation between two measures after measurement error has been considered. We then review techniques that have been proposed for discriminant validity assessment, demonstrating some problems and equivalencies of these techniques that have gone unnoticed by prior research. After conducting Monte Carlo simulations that compare the techniques, we present techniques called CICFA(sys) and χ²(sys) that applied researchers can use to assess discriminant validity.
Statistical and methodological myths and urban legends
Partial least squares (PLS) path modeling is increasingly being promoted as a technique of choice for various analysis scenarios, despite the serious shortcomings of the method. The current lack of methodological justification for PLS prompted the editors of this journal to declare that research using this technique is likely to be desk-rejected (Guide and Ketokivi, 2015). To clarify why PLS is inappropriate for applied research, we provide a non-technical review and empirical demonstration of its inherent, intractable problems. We show that although the PLS technique is promoted as a structural equation modeling (SEM) technique, it is simply regression with scale scores and thus has very limited capabilities to handle the wide array of problems for which applied researchers use SEM. We then explain why the use of PLS weights and many rules of thumb commonly employed with PLS is unjustifiable, and why the touted advantages of the method are untenable.
Entities such as individuals, teams, or organizations can vary systematically from one another. Researchers typically model such data using multilevel models, assuming that the random effects are uncorrelated with the regressors. Violating this testable assumption, which is often ignored, creates an endogeneity problem thus preventing causal interpretations. Focusing on two-level models, we explain how researchers can avoid this problem by including cluster means of the Level 1 explanatory variables as controls; we explain this point conceptually and with a large-scale simulation. We further show why the common practice of centering the predictor variables is mostly unnecessary. Moreover, to examine the state of the science, we reviewed 204 randomly drawn articles from macro and micro organizational science and applied psychology journals, finding that only 106 articles—with a slightly higher proportion from macro-oriented fields—properly deal with the random effects assumption. Alarmingly, most models also failed on the usual exogeneity requirement of the regressors, leaving only 25 mostly macro-level articles that potentially reported trustworthy multilevel estimates. We offer a set of practical recommendations for researchers to model multilevel data appropriately.
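The recommendation above, adding cluster means of the Level 1 predictors as controls (often called the Mundlak device), can be illustrated with a minimal simulation. The setup and numbers below are illustrative assumptions, not the article's actual review design: a random intercept is deliberately made to correlate with the regressor, so naive pooled OLS is biased while the cluster-mean specification recovers the within-cluster effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n_clusters, n_per = 200, 10

# Cluster random intercepts that are correlated with x
# (this violates the usual random-effects assumption)
u = rng.normal(0, 1.0, n_clusters)
x = rng.normal(0, 1.0, (n_clusters, n_per)) + 0.8 * u[:, None]
beta = 0.5  # true within-cluster effect (assumed for this sketch)
y = beta * x + u[:, None] + rng.normal(0, 1.0, (n_clusters, n_per))

X, Y = x.ravel(), y.ravel()
xbar = np.repeat(x.mean(axis=1), n_per)  # cluster means of the Level 1 predictor

# Naive pooled OLS: biased, because x correlates with the cluster effect
b_naive = np.linalg.lstsq(np.column_stack([np.ones_like(X), X]), Y, rcond=None)[0][1]

# Mundlak specification: same regression plus the cluster means as a control
b_mundlak = np.linalg.lstsq(np.column_stack([np.ones_like(X), X, xbar]), Y, rcond=None)[0][1]

print(round(b_naive, 2), round(b_mundlak, 2))  # naive is biased upward; Mundlak is close to 0.5
```

The coefficient on x in the second regression equals the within-cluster (fixed-effects) estimate, which is why the cluster means absorb the problematic correlation.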
Confidence intervals (CIs) are an alternative to null hypothesis significance testing (NHST) (Nickerson 2000; Wood 2005); both techniques essentially convey information about how certain we are of an estimate. A CI consists of two values, upper and lower confidence limits, associated with a confidence level, typically 95%. Valid CIs should satisfy two important conditions (DiCiccio and Efron 1996). First, a CI should contain the population value of the parameter under estimation with the stated degree of confidence over a large number of repeated samples. For example, a 95% CI should contain the population value of a parameter 95% of the time, with the true value of the parameter falling outside of the interval only in 5% of the cases. Second, in those cases where the true value of the parameter falls beyond the boundaries of the interval, it should do so in a balanced way. Using again the 95% CI as an example, the population value should be higher than the upper boundary in 2.5% of the samples and lower than the lower boundary of the interval 2.5% of the time. We illustrate these two properties of CIs in Figure A1. The figure shows 250 correlation estimates drawn from three different populations (where the true value of the correlation is 0, 0.3, or 0.6, respectively), ordered from smallest to largest, and their 95% CIs. In this scenario, the population value falls outside the CI only about 5% of the time and does so in a balanced way, such that the population value lies above the CI 2.5% of the time and below the CI 2.5% of the time. The figure also shows that both the variance of the estimates and the width of the CIs depend on the population value of the correlation; when the population correlation is zero, the difference between the largest and smallest estimate is close to 0.5, but when the population value is 0.6 this difference decreases to about 0.35. Similarly, the CIs are narrower for larger estimates. 
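The coverage property described above can be checked with a small simulation. The sketch below is an illustrative assumption rather than the procedure behind Figure A1: it uses the standard Fisher z transformation to build approximate 95% CIs for a correlation and counts how often they contain the population value.

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation via the Fisher z transformation."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    return np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)

def coverage(rho, n=100, reps=2000):
    """Fraction of replications whose CI contains the population correlation rho."""
    hits = 0
    for _ in range(reps):
        # Draw bivariate normal data with population correlation rho
        xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
        r = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]
        lo, hi = fisher_ci(r, n)
        hits += lo <= rho <= hi
    return hits / reps

covs = {}
for rho in (0.0, 0.3, 0.6):
    covs[rho] = coverage(rho)
    print(rho, round(covs[rho], 3))  # each should be close to the nominal 0.95
```

The same population values (0, 0.3, 0.6) as in the text are used, so the narrowing of the intervals for larger correlations is also visible in the simulated CI endpoints.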
This is an important feature of CIs that unfortunately complicates their calculation, as we discuss later. The CI of a correlation has a valid closed form solution, but estimating CIs for more complex scenarios is a non-trivial problem. The most straightforward way to estimate CIs is to use a known theoretical distribution. We refer to these as parametric approaches. When the distribution of the estimates is not known, as is the case with those obtained from PLSc, CIs based on bootstrapping provide an attractive alternative (Wood 2005). In these approaches, which we refer to as empirical, the endpoints of the CIs are not taken from a known statistical distribution, but rather the values are obtained from the empirically approximated bootstrap distribution. Bootstrapping means that we draw a large number of samples from our original data and calculate the statistic for each sample. The samples are drawn with replacement, which means that each observation in the original sample can be included in each bootstrap sample multiple times. While bootstrapping can be useful when working with statistics whose samp...
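The percentile bootstrap described above can be sketched in a few lines: resample rows of the original data with replacement, recompute the statistic each time, and read the CI endpoints off the empirical bootstrap distribution rather than a known theoretical one. The data-generating step is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Original sample: two correlated variables (assumed for illustration)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)
data = np.column_stack([x, y])

# Percentile bootstrap: resample rows WITH replacement, so each observation
# can appear in a bootstrap sample multiple times
n_boot = 2000
boot_r = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)
    sample = data[idx]
    boot_r[b] = np.corrcoef(sample[:, 0], sample[:, 1])[0, 1]

# CI endpoints come from the empirical bootstrap distribution
lo, hi = np.quantile(boot_r, [0.025, 0.975])
print(round(lo, 2), round(hi, 2))
```

Because the endpoints are quantiles of the resampled statistics, the interval needs no closed-form sampling distribution, which is what makes this approach attractive for statistics such as those obtained from PLSc.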
Partial least squares path modeling (PLS) has been increasing in popularity as a form of, or an alternative to, structural equation modeling (SEM) and currently has considerable momentum in some management disciplines. Despite recent criticism of the method, most existing studies analyzing the performance of PLS have reached positive conclusions. This article shows that most of the evidence for the usefulness of the method rests on a misinterpretation. The analysis presented shows that PLS amplifies the effects of chance correlations in a unique way, and this effect explains prior simulation results better than the previous interpretations. It is unlikely that a researcher would willingly amplify error, and the results therefore show that the supposed usefulness of the PLS method is a fallacy. There are much better ways to compensate for the attenuation caused by using latent variable scores to estimate SEM models than creating a bias in the opposite direction.
The partial least squares technique (PLS) has been touted as a viable alternative to latent variable structural equation modeling (SEM) for evaluating theoretical models in the differential psychology domain. We bring some balance to the discussion by reviewing the broader methodological literature to highlight: (1) the misleading characterization of PLS as an SEM method; (2) the limitations of PLS for global model testing; (3) problems in testing the significance of path coefficients; (4) extremely high false-positive rates when empirical confidence intervals are used in conjunction with a new "sign-change correction" for path coefficients; (5) misconceptions surrounding the supposedly superior ability of PLS to handle small sample sizes and non-normality; and (6) conceptual and statistical problems with formative measurement and the application of PLS to such models. We also reanalyze the dataset provided by
Purpose: The study seeks to add to the existing body of knowledge on the link between strategic planning and company performance by exploring the mediating roles of personnel commitment to strategy implementation and organisational learning. To study the indirect link between strategic planning and company performance, the paper introduces a participative strategic planning construct that may enable firms to: commit personnel to strategy implementation; increase organisational learning; and improve company performance.
Design/methodology/approach: Using data from 160 small and medium-sized Finnish IT companies, the authors conduct an analysis in Mplus.
Findings: The findings indicate that participative strategic planning positively affects personnel commitment to strategy implementation, which in turn increases company performance. However, according to the analysis, participative strategic planning does not affect organisational learning, although organisational learning does have a positive impact on company performance.
Research limitations/implications: The results of this study are generalisable to a dynamic industry context of small and medium-sized IT firms operating in a small open economy, such as that of Finland.
Practical implications: The results suggest that managers need to involve personnel in strategic planning to increase their commitment to strategy implementation. However, because participative strategic planning does not facilitate organisational learning, managers need to find other ways to facilitate learning at an organisational level.
Originality/value: The paper highlights the role of participative strategic planning, which facilitates personnel commitment to strategy implementation and thus improves company performance.