Multiple linear regression (MLR) remains a mainstay analysis in organizational research, yet intercorrelations between predictors (multicollinearity) undermine the interpretation of MLR weights in terms of predictor contributions to the criterion. Alternative indices include validity coefficients, structure coefficients, product measures, relative weights, all-possible-subsets regression, dominance weights, and commonality coefficients. This article reviews these indices and, uniquely, offers freely available software that (a) computes and compares all of these indices with one another, (b) computes associated bootstrapped confidence intervals, and (c) does so for any number of predictors so long as the correlation matrix is positive definite. Other available software is limited in all of these respects. We invite researchers to use this software to increase their insights when applying MLR to a data set. Avenues for future research and application are discussed.

Keywords: multiple regression, quantitative research, exploratory, research design

A continued goal of organizational researchers conducting regression analysis is to make inferences about the relative importance of predictor variables (cf. Nimon, Gavrilova, & Roberts, 2010; Zientek, Capraro, & Capraro, 2008), yet it is all too common to rely heavily (if not solely) on the regression coefficients from the analysis, which optimize sample-specific prediction (i.e., minimize the sum of squared errors). Instead, other metrics that operationalize relative importance in ways consistent with such researchers' goals would seem more appropriate, and a range of metrics and approaches exists. In addition to the regression weights and zero-order correlation coefficients that researchers likely report, MLR interpretation may be further informed by considering structure
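The abstract above contrasts regression weights with alternative importance indices and mentions bootstrapped confidence intervals for them. As a minimal sketch (not the article's actual software), the following NumPy code computes standardized beta weights and percentile-bootstrap confidence intervals; the function names and bootstrap settings are illustrative assumptions.

```python
import numpy as np

def beta_weights(X, y):
    """Standardized regression weights: least squares on z-scored data."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    return np.linalg.lstsq(Xz, yz, rcond=None)[0]

def bootstrap_ci(X, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence intervals for the beta weights."""
    rng = np.random.default_rng(seed)
    n = len(y)
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample cases with replacement
        boots[b] = beta_weights(X[idx], y[idx])
    lo = np.percentile(boots, 100 * alpha / 2, axis=0)
    hi = np.percentile(boots, 100 * (1 - alpha / 2), axis=0)
    return lo, hi
```

The same resampling loop could wrap any of the other indices (structure coefficients, commonality coefficients, etc.) simply by swapping the statistic computed inside it.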
While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using a single technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to the regression model but to each other as well. Techniques for interpreting MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all possible subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article reviews a set of techniques for interpreting MR effects, identifies the elements of the data on which each method focuses, and identifies statistical software to support such analyses.
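Among the indices listed above, structure coefficients have a particularly simple form: the correlation between each predictor and the model's predicted scores, which equals the predictor's zero-order correlation with the criterion divided by the multiple correlation R. A minimal NumPy sketch (the function name is illustrative, not taken from any package the abstracts survey):

```python
import numpy as np

def structure_coefficients(X, y):
    """Structure coefficients: corr(x_j, yhat) for each predictor x_j.
    Equivalently r_xy / R, where R is the multiple correlation."""
    X1 = np.column_stack([np.ones(len(y)), X])          # add intercept column
    yhat = X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]   # fitted values
    return np.array([np.corrcoef(X[:, j], yhat)[0, 1]
                     for j in range(X.shape[1])])
```

Because structure coefficients ignore the other predictors' weights, they stay interpretable even when beta weights are distorted by multicollinearity.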
The purpose of this article is to respond to the lack of consistency in the academic and practitioner literature regarding the construct of employee engagement and to offer a platform for the research and use of a refined construct called employee work passion. This article analyzes the differences between the two groups of writers' concepts of engagement and proposes a new definition and framework based on social cognitive theory. Three recommendations are made for human resource development researchers and practitioners who seek to improve both the data and the strategies used in constructing engagement or work passion surveys. Engagement or passion surveys should (a) specifically and convincingly assess the affective components of the appraisal process, (b) differentiate descriptive cognitions and intentions, and (c) separate and corroborate intentions from behaviors.
Summary

1. In the face of natural complexities and multicollinearity, model selection and prediction using multiple regression may be ambiguous and risky. Confounding effects of predictors often cloud researchers' assessment and interpretation of the single best 'magic model'. The shortcomings of stepwise regression have been extensively described in the statistical literature, yet it is still widely used in the ecological literature. Similarly, hierarchical regression, which is thought to be an improvement on the stepwise procedure, fails to address multicollinearity.

2. We propose that regression commonality analysis (CA), a technique more commonly used in psychology and education research, will be helpful in interpreting the typical multiple regression analyses conducted on ecological data.

3. CA decomposes R² into unique and common (or shared) variance components (effects) of predictors, and hence it can significantly improve exploratory capabilities in studies where multiple regressions are widely used, particularly when predictors are correlated. CA can explicitly identify the magnitude and location of multicollinearity and suppression in a regression model. In this paper, using a simulated data set (generated from a correlation matrix) and an empirical data set (human habitat selection; migration of Canadians across cities), we demonstrate how CA can be used with correlated predictors in multiple regression to improve our understanding and interpretation of data. We strongly encourage the use of CA in ecological research as a follow-on analysis from multiple regression.
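For the two-predictor case, the CA decomposition described above reduces to three components computed from subset R² values: each predictor's unique contribution and the variance the two share. A minimal NumPy sketch under that assumption (function and key names are illustrative):

```python
import numpy as np

def r2(X, y):
    """R-squared of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    yhat = X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

def commonality_two_predictors(x1, x2, y):
    """Unique and common variance components for two predictors."""
    r2_full = r2(np.column_stack([x1, x2]), y)
    r2_1 = r2(x1.reshape(-1, 1), y)       # x1 alone
    r2_2 = r2(x2.reshape(-1, 1), y)       # x2 alone
    return {
        "U(x1)": r2_full - r2_2,          # unique to x1
        "U(x2)": r2_full - r2_1,          # unique to x2
        "C(x1,x2)": r2_1 + r2_2 - r2_full,  # shared by x1 and x2
        "R2": r2_full,
    }
```

With k predictors the decomposition yields 2^k − 1 components, but the logic is the same: differences of subset R² values. A negative common component signals suppression, one of the diagnostic uses the abstract highlights.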
Direct gradient analyses in spatial genetics provide unique opportunities to describe the inherent complexity of genetic variation in wildlife species and are the object of many methodological developments. However, multicollinearity among explanatory variables is a systemic issue in multivariate regression analyses and is likely to cause serious difficulties in properly interpreting the results of direct gradient analyses, with the risk of erroneous conclusions, misdirected research, and inefficient or counterproductive conservation measures. Using simulated data sets along with linear and logistic regressions on distance matrices, we illustrate how commonality analysis (CA), a detailed variance-partitioning procedure that was recently introduced in the field of ecology, can be used to deal with nonindependence among spatial predictors. By decomposing model fit indices into unique and common (or shared) variance components, CA makes it possible to identify the location and magnitude of multicollinearity, reveal spurious correlations, and thus substantially improve the interpretation of multivariate regressions. Despite a few inherent limitations, especially in the case of resistance model optimization, this review highlights the great potential of CA to account for complex multicollinearity patterns in spatial genetics and identifies future applications and lines of research. We strongly urge spatial geneticists to systematically investigate commonalities when performing direct gradient analyses.