The automatic or blind inclusion of control variables in multiple regression and other analyses, intended to purify observed relationships among variables of interest, is widespread and can be considered an example of practice based on a methodological urban legend. Inclusion of such variables in most cases implicitly assumes that the control variables are somehow either contaminating the measurement of the variables of interest or affecting the underlying constructs, thus distorting observed relationships among them. There are, however, a number of alternative mechanisms that would produce the same statistical results, thus throwing into question whether inclusion of control variables has led to more or less accurate interpretation of results. The authors propose that researchers should be explicit rather than implicit regarding the role of control variables and match hypotheses precisely to both the choice of variables and the choice of analyses. The authors further propose that researchers avoid testing models in which demographic variables serve as proxies for variables that are of real theoretical interest in their data.
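To make the interpretive ambiguity concrete, here is a minimal simulation sketch (not from the article; the data, variable names, and effect sizes are invented). It shows the statistical pattern at issue: a predictor's coefficient shrinks once a correlated control variable enters the model, yet that same shrinkage is equally consistent with measurement contamination, genuine confounding, or the control acting as a proxy for a substantive variable.

```python
# Hypothetical illustration: how entering a control variable changes an
# observed relationship. All names and data are invented for this sketch.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
control = rng.normal(size=n)                # e.g., a demographic proxy
x = 0.6 * control + rng.normal(size=n)      # predictor shares variance with control
y = 0.5 * x + 0.4 * control + rng.normal(size=n)

# Zero-order relationship: x alone
m1 = sm.OLS(y, sm.add_constant(x)).fit()

# "Purified" relationship: x plus the control variable
m2 = sm.OLS(y, sm.add_constant(np.column_stack([x, control]))).fit()

# The x coefficient shrinks once the control enters; the statistics alone
# cannot say which causal story produced the shrinkage.
print(m1.params[1], m2.params[1])
```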
We describe the construction of a Job in General (JIG) scale, a global scale to accompany the facet scales of the Job Descriptive Index. We applied both traditional and item response theory procedures for item analysis to data from three large heterogeneous samples (N = 1,149, 3,566, and 4,490). Alpha was .91 and above for the resulting 18-item scale in successive samples. Convergent and discriminant validity and differential response to treatments were demonstrated. Global scales are contrasted with composite and with facet scales in psychological measurement. We show that global scales are not equivalent to summated facet scales. Both facet and global scales were useful in another organization (N = 648). Some principles are suggested for choosing specific (facet), composite, or global measures for practical and theoretical problems. The correlations between global and facet scales suggest that work may be the most important facet in relation to general job satisfaction.
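As a hedged illustration of the reliability figure quoted above, the sketch below computes coefficient alpha in Python. The response matrix is simulated; only the item count (18) and the first sample size (1,149) are taken from the abstract.

```python
# Minimal coefficient-alpha sketch; data are simulated, not the JIG samples.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scored responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(size=(1149, 1))           # sample size from the abstract
responses = true_score + rng.normal(scale=0.6, size=(1149, 18))
print(round(cronbach_alpha(responses), 2))        # high alpha for homogeneous items
```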
The issue of publication bias in psychological science is one that has remained difficult to address despite decades of discussion and debate. The current article examines a sample of 91 recent meta-analyses published in American Psychological Association and Association for Psychological Science journals and the methods used in these analyses to identify and control for publication bias. Of the 91 studies analyzed, 64 (70%) made some effort to analyze publication bias, and 26 of those (41%) reported finding evidence of bias. Approaches to controlling publication bias were heterogeneous among studies. Of the 91 studies, 57 (63%) attempted to find unpublished studies to control for publication bias. Nonetheless, those studies that included unpublished studies were just as likely to find evidence for publication bias as those that did not. Furthermore, the meta-analyses' own authors were overrepresented among the unpublished studies acquired, as compared with published studies, suggesting that searches for unpublished studies may increase rather than decrease some sources of bias. A subset of 48 meta-analyses for which study sample sizes and effect sizes were available was further analyzed with a conservative and newly developed tandem procedure for assessing publication bias. Results indicated that publication bias was worrisome in about 25% of meta-analyses. Meta-analyses that included unpublished studies were more likely to show bias than those that did not, likely due to selection bias in unpublished literature searches. Sources of publication bias and implications for the use of meta-analysis are discussed.
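The tandem procedure itself is not reproduced here. As a hedged sketch of one ingredient commonly used in such assessments, the code below runs Egger's regression test for funnel-plot asymmetry on simulated effect sizes, with a crude censoring step standing in for selective publication; all values are invented.

```python
# Egger's regression test for small-study asymmetry on simulated studies.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
se = rng.uniform(0.05, 0.5, size=200)             # per-study standard errors
d = rng.normal(0.1, se)                           # true mean effect d = .10
keep = (d / se > 1.0) | (rng.random(200) < 0.3)   # crude stand-in for selective publication
d, se = d[keep], se[keep]

# Regress standardized effects on precision; an intercept far from zero
# signals the funnel asymmetry associated with publication bias.
fit = sm.OLS(d / se, sm.add_constant(1.0 / se)).fit()
print(f"intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```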
This study investigates the extent to which job applicants fake their responses on personality tests. Thirty-three studies that compared job applicant and non-applicant personality scale scores were meta-analyzed. Across all job types, applicants scored significantly higher than non-applicants on extraversion (d = .11), emotional stability (d = .44), conscientiousness (d = .45), and openness (d = .13). For certain jobs (e.g., sales), however, the rank ordering of mean differences changed substantially, suggesting that job applicants distort responses on personality dimensions that are viewed as particularly job relevant. Smaller mean differences were found in this study than those reported by Viswesvaran and Ones (Educational and Psychological Measurement, 59(2), 197-210), who compared scores for induced "fake-good" vs. honest response conditions. Also, direct Big Five measures produced substantially larger differences than did indirect Big Five measures.
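For readers unfamiliar with the d metric used above, the following is a minimal sketch of a standardized mean difference for one hypothetical applicant vs. incumbent comparison. The scores are simulated; only the rough magnitude (d near .45 for conscientiousness) mirrors the abstract.

```python
# Cohen's d with a pooled standard deviation; data are invented.
import numpy as np

def cohens_d(group1: np.ndarray, group2: np.ndarray) -> float:
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * group1.var(ddof=1)
                  + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
    return (group1.mean() - group2.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(3)
applicants = rng.normal(5.45, 1.0, size=300)       # simulated conscientiousness scores
incumbents = rng.normal(5.00, 1.0, size=300)
print(round(cohens_d(applicants, incumbents), 2))  # roughly the d = .45 reported
```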
Study Objectives: Mounting evidence implicates disturbed sleep or lack of sleep as one of the risk factors for Alzheimer's disease (AD), but the extent of the risk is uncertain. We conducted a broad systematic review and meta-analysis to quantify the effect of sleep problems/disorders on cognitive impairment and AD. Methods: Original published literature assessing any association of sleep problems or disorders with cognitive impairment or AD was identified by searching PubMed, Embase, Web of Science, and the Cochrane Library. Effect estimates of individual studies were pooled, and relative risks (RR) and 95% confidence intervals (CI) were calculated using random-effects models. We also estimated the population attributable risk. Results: Twenty-seven observational studies (n = 69,216 participants) that provided 52 RR estimates were included in the meta-analysis. Individuals with sleep problems had a 1.55 (95% CI: 1.25-1.93), 1.65 (95% CI: 1.45-1.86), and 3.78 (95% CI: 2.27-6.30) times higher risk of AD, cognitive impairment, and preclinical AD, respectively, than individuals without sleep problems. The overall meta-analysis revealed that individuals with sleep problems had a 1.68 (95% CI: 1.51-1.87) times higher risk for the combined outcome of cognitive impairment and/or AD. Approximately 15% of AD in the population may be attributed to sleep problems. Conclusion: This meta-analysis confirmed the association between sleep and cognitive impairment or AD and, for the first time, consolidated the evidence to provide an "average" magnitude of effect. As sleep problems are a growing concern in the population, these findings are of interest for the potential prevention of AD.
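As a hedged sketch of the pooling step described above, the code below implements a standard DerSimonian-Laird random-effects combination of log relative risks, plus the usual population-attributable-risk formula. The inputs are invented illustrative values, not the 52 estimates from the review.

```python
# Random-effects pooling (DerSimonian-Laird) of log relative risks,
# and the population attributable risk PAR = p(RR-1) / (1 + p(RR-1)).
import numpy as np

def pool_random_effects(log_rr, var):
    log_rr, var = np.asarray(log_rr), np.asarray(var)
    w = 1.0 / var                                      # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)              # heterogeneity statistic
    k = len(log_rr)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (var + tau2)                        # random-effects weights
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    se = 1.0 / np.sqrt(np.sum(w_star))
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

def par(p, rr):
    """Population attributable risk for exposure prevalence p and pooled RR."""
    return p * (rr - 1) / (1 + p * (rr - 1))

rr, lo, hi = pool_random_effects(
    log_rr=np.log([1.4, 1.9, 1.6, 2.1, 1.3]),          # invented study RRs
    var=[0.04, 0.09, 0.02, 0.12, 0.05],
)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), PAR = {par(0.3, rr):.0%}")
```

With a hypothetical exposure prevalence of 30%, a pooled RR in this range yields a PAR near the 15% figure the abstract reports.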
Risk-related antecedent variables can be linked to later alcohol consumption by memory processes, and alcohol expectancies may be one relevant memory content. To advance research in this area, it would be useful to apply current memory models such as semantic network theory to explain drinking decision processes. We used multidimensional scaling (MDS) to empirically model a preliminary alcohol expectancy semantic network, from which a theoretical account of drinking decision making was generated. Subanalyses (PREFMAP) showed how individuals with differing alcohol consumption histories may have had different association pathways within the expectancy network. These pathways may have, in turn, influenced future drinking levels and behaviors while the person was under the influence of alcohol. All individuals associated positive/prosocial effects with drinking, but heavier drinkers indicated arousing effects as their highest probability associates, whereas light drinkers expected sedation. An important early step in this MDS modeling process is the determination of iso-meaning expectancy adjective groups, which correspond to theoretical network nodes.
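Here is a hedged sketch of the MDS step: embedding expectancy adjectives from a dissimilarity matrix so that iso-meaning groups appear as nearby points. The adjectives and distances below are invented for illustration; the published network was derived from actual association data.

```python
# MDS embedding of expectancy adjectives from a precomputed dissimilarity
# matrix; nearby points would form candidate iso-meaning groups (nodes).
import numpy as np
from sklearn.manifold import MDS

adjectives = ["happy", "sociable", "talkative", "dizzy", "sleepy", "relaxed"]
# Symmetric dissimilarities (0 = identical meaning); values are invented.
diss = np.array([
    [0.0, 0.2, 0.3, 0.8, 0.9, 0.7],
    [0.2, 0.0, 0.2, 0.7, 0.8, 0.6],
    [0.3, 0.2, 0.0, 0.6, 0.8, 0.7],
    [0.8, 0.7, 0.6, 0.0, 0.4, 0.5],
    [0.9, 0.8, 0.8, 0.4, 0.0, 0.3],
    [0.7, 0.6, 0.7, 0.5, 0.3, 0.0],
])
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(diss)
for adj, (x, y) in zip(adjectives, coords):
    print(f"{adj:10s} {x:+.2f} {y:+.2f}")   # arousal vs. sedation clusters apart
```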
Covariance structure modeling, also known as structural equation modeling or causal modeling, appears increasingly popular. Such techniques can be used to conduct tests of complex theory on empirical data. To conduct such tests, researchers need measures of known factor structure and knowledge of the structural relations among the constructs of interest. Researchers typically have neither the required measures nor the knowledge of structural relations. Instead of conducting tests of theory, researchers use covariance structure models to develop measures and theoretical models. This paper discusses why such use of covariance structure models is unlikely to produce scientific progress and proposes some alternative procedures thought to be more fruitful.