Given considerable racial differences in voluntary turnover (Bureau of Labor Statistics, 2006, Table 28), the present study examined the influence of diversity climate perceptions on turnover intentions among managerial employees in a national retail organization. The authors hypothesized that pro-diversity work climate perceptions would correlate most negatively with turnover intentions among Blacks, followed in order of strength by Hispanics and Whites (Hypothesis 1), and that organizational commitment would mediate these interactive effects of race and diversity climate perceptions on turnover intentions (Hypothesis 2). Results from a sample of 5,370 managers partially supported both hypotheses, as findings were strongest among Blacks. Contrary to the hypotheses, however, White men and women exhibited slightly stronger effects than Hispanic personnel.
Over the last 15 years, a number of methodological developments have enabled researchers to draw more accurate inferences concerning the relative contribution (i.e., relative importance) of multiple (often correlated) predictor variables in a regression analysis. One such development is relative weight analysis (RWA). Researchers can use an RWA to decompose the total variance predicted in a regression model (R²) into weights that accurately reflect the proportional contribution of the various predictor variables. Prior to RWA, researchers were forced to rely on traditional statistics (e.g., correlations, standardized regression weights), which are known to yield faulty or misleading information concerning variable importance (especially when predictor variables are correlated with one another, as is often the case in organizational research). Although there has been a surge of interest in RWA over the last 10 years, integration of this statistical tool into organizational research has been hampered by the lack of a user-friendly statistical package for implementing it. Indeed, most popular statistical packages (e.g., SPSS, SAS) have yet to incorporate RWA protocols into their regression modules. The purpose of this paper is to present a new, free, comprehensive, web-based, user-friendly resource, RWA-Web, which may be used by anyone with access to the internet. Our paper is structured as a tutorial on using RWA-Web to examine relative importance in the classic multiple regression model, the multivariate multiple regression model, and the logistic regression model. We also illustrate how RWA-Web may be used to conduct null hypothesis significance tests using advanced bootstrapping procedures.
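The decomposition this abstract describes can be sketched in a few lines of NumPy. The following is an illustrative implementation of the standard epsilon-weight computation (Johnson, 2000), not RWA-Web's actual code; the function name and interface are assumptions for the sake of the example.

```python
import numpy as np

def relative_weights(X, y):
    """Epsilon weights: decompose a regression model's R^2 into
    nonnegative per-predictor shares, even when predictors are correlated."""
    # Work with correlations so the result is scale-free
    Rxx = np.corrcoef(X, rowvar=False)               # predictor intercorrelations
    rxy = np.array([np.corrcoef(X[:, j], y)[0, 1]    # predictor-criterion correlations
                    for j in range(X.shape[1])])
    # Lambda = Rxx^(1/2) relates the predictors to their closest
    # orthogonal counterparts (via the eigendecomposition of Rxx)
    evals, evecs = np.linalg.eigh(Rxx)
    lam = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    beta = np.linalg.solve(lam, rxy)                 # betas for the orthogonal variables
    eps = (lam ** 2) @ (beta ** 2)                   # relative weights
    return eps                                       # eps.sum() equals the model R^2
```

Because the weights are nonnegative and sum to the model R², each `eps[j]` can be read as the share of predicted variance attributable to predictor j, which is exactly the "proportional contribution" the abstract refers to.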
Relative weight analysis is a procedure for estimating the relative importance of correlated predictors in a regression equation. Because the sampling distribution of relative weights is unknown, researchers using relative weight analysis are unable to make judgments regarding the statistical significance of the relative weights. J. W. Johnson (2004) presented a bootstrapping methodology to compute standard errors for relative weights, but this procedure cannot be used to determine whether a relative weight is significantly different from zero. This article presents a bootstrapping procedure that allows one to determine the statistical significance of a relative weight. The authors conducted a Monte Carlo study to explore the Type I error, power, and bias associated with their proposed technique. They illustrate this approach here by applying the procedure to published data.
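One common way to operationalize the significance test this abstract describes is to append a randomly generated (hence known-irrelevant) reference variable, bootstrap the difference between each predictor's relative weight and the reference's weight, and flag a predictor as significant when the percentile confidence interval for that difference excludes zero. The sketch below follows that logic; it is a self-contained illustration (including a compact epsilon-weight helper), not the authors' or RWA-Web's exact code, and the function names and defaults are assumptions.

```python
import numpy as np

def relative_weights(X, y):
    """Epsilon weights: decompose model R^2 into per-predictor shares."""
    Rxx = np.corrcoef(X, rowvar=False)
    rxy = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    evals, evecs = np.linalg.eigh(Rxx)
    lam = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    beta = np.linalg.solve(lam, rxy)
    return (lam ** 2) @ (beta ** 2)

def rw_significant(X, y, n_boot=1000, alpha=0.05, seed=0):
    """Flag predictors whose relative weight reliably exceeds that of a
    random reference variable (whose population weight is zero)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xr = np.column_stack([X, rng.normal(size=n)])  # append the reference variable once
    diffs = np.empty((n_boot, p))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)           # resample rows with replacement
        eps = relative_weights(Xr[idx], y[idx])
        diffs[b] = eps[:p] - eps[p]                # weight minus reference weight
    lower = np.percentile(diffs, 100 * alpha / 2, axis=0)
    return lower > 0                               # CI entirely above zero -> significant
```

Note the design choice: because every sample relative weight is nonnegative, a naive CI for the weight itself can never include zero; comparing against a random reference variable is what turns the bootstrap into a usable null hypothesis test.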
In describing measures used in their research, authors frequently report having adapted a scale, indicating that they changed something about it. Although such changes can raise concerns about validity, there has been little discussion of this practice in our literature. To estimate the prevalence and identify key forms of scale adaptation, we conducted two studies of the literature. In Study 1, we reviewed the descriptions of all scales (N = 2,088) in four top journals over a 2-year period. We found that 46% of all scales were reported by authors as adapted and that evidence to support the validity of the adapted scales was presented in 23% of those cases. In Study 2, we chose six scales and examined their use across the literature, which allowed us to identify unreported adaptations. We found that 85% of the administrations of these scales involved at least one form of adaptation, and many involved multiple adaptations. In Study 3, we surveyed editorial board members and a select group of psychometricians to evaluate the extent to which particular adaptations raised concerns about validity and the kinds of evidence needed to support the validity of the adapted scales. To provide guidance for authors who adapt scales and for reviewers and editors who evaluate papers with adapted scales, we discuss several forms of adaptation, the potential threats to validity each poses, and the kinds of evidence that might best support the validity of an adapted scale (including a reviewer checklist).