This study examines the relationship between psychosocial and study skill factors (PSFs) and college outcomes by meta-analyzing 109 studies. On the basis of educational persistence and motivational theory models, the PSFs were categorized into 9 broad constructs: achievement motivation, academic goals, institutional commitment, perceived social support, social involvement, academic self-efficacy, general self-concept, academic-related skills, and contextual influences. Two college outcomes were targeted: performance (cumulative grade point average; GPA) and persistence (retention). Meta-analyses indicate moderate relationships between retention and academic goals, academic self-efficacy, and academic-related skills (ρs = .340, .359, and .366, respectively). The best predictors for GPA were academic self-efficacy and achievement motivation (ρs = .496 and .303, respectively). Supplementary regression analyses confirmed the incremental contributions of the PSFs over and above those of socioeconomic status, standardized achievement, and high school GPA in predicting college outcomes.
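Incremental validity of the kind reported in these supplementary analyses is typically quantified as the gain in R² when a psychosocial predictor is added to a baseline predictor, and in meta-analytic work it can be computed directly from a correlation matrix. A minimal sketch of the two-predictor case (the function name and the example correlations are illustrative assumptions, not values from the study):

```python
def incremental_r2(r_y1, r_y2, r_12):
    """Gain in R-squared when predictor 2 is added to predictor 1,
    computed from a correlation matrix (as in meta-analytic regression).

    r_y1: correlation of the outcome with predictor 1 (e.g., HS GPA)
    r_y2: correlation of the outcome with predictor 2 (e.g., a PSF)
    r_12: correlation between the two predictors
    """
    # Multiple R-squared for the two-predictor model.
    r2_full = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    # Baseline model with predictor 1 alone explains r_y1**2.
    return r2_full - r_y1**2

# Hypothetical correlations (not the study's data):
gain = incremental_r2(r_y1=0.45, r_y2=0.35, r_12=0.20)
```

The closed form works because with standardized variables the multiple R² depends only on the zero-order correlations, which is exactly what a meta-analytic correlation matrix supplies.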
Theoretically, low correlations between implicit and explicit measures can be due to (a) motivational biases in explicit self-reports, (b) lack of introspective access to implicitly assessed representations, (c) factors influencing the retrieval of information from memory, (d) method-related characteristics of the two measures, or (e) complete independence of the underlying constructs. The present study addressed these questions from a meta-analytic perspective, investigating the correlation between the Implicit Association Test (IAT) and explicit self-report measures. Based on a sample of 126 studies, the mean effect size was .24, with approximately half of the variability across correlations attributable to moderator variables. Correlations systematically increased as a function of (a) increasing spontaneity of self-reports and (b) increasing conceptual correspondence between measures. These results suggest that implicit and explicit measures are generally related but that higher order inferences and lack of conceptual correspondence can reduce the influence of automatic associations on explicit self-reports.
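A mean effect size such as the .24 reported here is conventionally obtained by weighting each study's correlation by its precision, often via Fisher's z transform. A minimal sketch of that standard aggregation step (the function name and the input values are illustrative assumptions, not the meta-analysis's actual data or necessarily its exact method):

```python
import math

def meta_mean_correlation(correlations, sample_sizes):
    """Sample-size-weighted mean correlation via Fisher's z transform.

    Each r is converted to z = atanh(r), averaged with weights n - 3
    (the inverse sampling variance of z), and back-transformed.
    """
    zs = [math.atanh(r) for r in correlations]
    weights = [n - 3 for n in sample_sizes]
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)

# Hypothetical study correlations and sample sizes:
r_bar = meta_mean_correlation([0.10, 0.24, 0.35], [50, 120, 80])
```

Working in z rather than r keeps the sampling distribution approximately normal with a variance that depends only on n, which is why the back-transformed weighted mean is the usual summary.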
Range restriction in most data sets is indirect, but the meta-analysis methods used to date have applied the correction for direct range restriction to data in which range restriction is indirect. The authors show that this results in substantial undercorrections for the effects of range restriction, and they present meta-analysis methods for making accurate corrections when range restriction is indirect. Applying these methods to a well-known large-sample empirical database, the authors estimate that previous meta-analyses have underestimated the correlation between general mental ability and job performance by about 25%, indicating that this is potentially an important methodological issue in meta-analysis in general.
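The correction that the authors argue has been misapplied to indirectly restricted data is the classical formula for direct range restriction (Thorndike's Case II). A minimal sketch of that formula, to make the undercorrection argument concrete (the function name and example values are mine):

```python
def correct_direct_range_restriction(r_restricted, u):
    """Thorndike's Case II correction for direct range restriction.

    r_restricted: observed correlation in the restricted (selected) sample.
    u: ratio of the predictor's SD in the unrestricted population to its
       SD in the restricted sample (u > 1 under selection).
    """
    r = r_restricted
    return (u * r) / ((u**2 * r**2 - r**2 + 1) ** 0.5)

# Hypothetical values: an observed r of .30 with a u-ratio of 1.5.
r_corrected = correct_direct_range_restriction(0.30, 1.5)
```

The authors' point is that when restriction operates indirectly (through a third variable correlated with the predictor), applying this Case II formula recovers too little of the attenuated correlation, which is how earlier meta-analyses came to underestimate validities such as that of general mental ability by roughly 25%.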
On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.
The relationships between personality traits and performance are often assumed to be linear. This assumption has been challenged conceptually and empirically, but results to date have been inconclusive. In the current study, we took a theory-driven approach in systematically addressing this issue. Results based on two different samples generally supported our expectations of the curvilinear relationships between personality traits, including Conscientiousness and Emotional Stability, and job performance dimensions, including task performance, organizational citizenship behavior, and counterproductive work behaviors. We also hypothesized and found that job complexity moderated the curvilinear personality–performance relationships such that the inflection points after which the relationships disappear were lower for low-complexity jobs than they were for high-complexity jobs. This finding suggests that high levels of the two personality traits examined are more beneficial for performance in high- than low-complexity jobs. We conclude by discussing the implications of these findings for the use of personality in personnel selection.
The authors report on a large-scale study examining the effects of self-reported psychosocial factors on 1st-year college outcomes. Using a sample of 14,464 students from 48 institutions, the authors constructed hierarchical regression models to measure the predictive validity of the Student Readiness Inventory, a measure of psychosocial factors. Controlling for institutional effects and traditional predictors, the authors tested the effects of motivational and skill, social, and self-management measures on academic performance and retention. Academic Discipline was incrementally predictive of academic performance (grade-point average) and retention. Social Activity and Emotional Control also helped predict academic performance and retention, whereas Commitment to College and Social Connection offered incremental prediction of retention. This study elaborates recent meta-analytic findings (S. Robbins et al., 2004), demonstrating the salience of a subset of motivational, social, and self-management factors. Future research questions include how measures of psychosocial factors can be used to aid students, the salience of these measures over the entire college experience and for predicting job performance, and the need for testing theoretical models for explaining postsecondary educational outcomes incorporating traditional, motivational, self-management, and social engagement factors.

With the scheduled revamping of the Higher Education Act, the accountability of postsecondary institutions for student academic performance and dropout has received much attention in professional reports as well as in the popular and research literatures. There is concern that college students are ill prepared to meet the hurdles they face upon entry into college.
To counter these problems, researchers have suggested tying federal (Stedman, 2003) and state (Hearn & Holdsworth, 2002) funding to outcomes of higher education. In practice, prediction of college success has largely centered on high-stakes testing. In many 4-year colleges and universities, there are many more applicants than spots, and high school academic performance and standardized achievement test scores are heavily weighted in admission decisions. A significant debate is occurring over how these indicators should be used. It has been argued that certain groups are disadvantaged by standardized test scores.
This research reports the results of a comprehensive investigation into the effectiveness of team building. The article serves to update and extend Salas, Rozell, Mullen, and Driskell's (1999) team-building meta-analysis by assessing a larger database and examining a broader set of outcomes. Our study considers the impact of four specific team-building components (goal setting, interpersonal relations, problem solving, and role clarification) on cognitive, affective, process, and performance outcomes. Results (based on 60 correlations) suggest that team building has a positive moderate effect across all team outcomes. In terms of specific outcomes, team building was most strongly related to affective and process outcomes. Results are also presented on the differential effectiveness of team building based upon the team size.
Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students' grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers' work from their own country 3.6% higher than work from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.
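The two quantities at the heart of these experiments, a grader's systematic bias relative to staff and the class's grading error, can be computed from paired peer/staff scores. A minimal sketch under the assumption that bias is the mean signed difference and error is the median absolute difference (function names and example grades are mine, not the course's data or code):

```python
from statistics import mean, median

def grading_bias(peer_grades, staff_grades):
    """Mean signed difference between a grader's scores and staff scores.

    Positive values mean the grader scores higher than staff on average;
    feeding this number back to students improved subsequent accuracy.
    """
    return mean(p - s for p, s in zip(peer_grades, staff_grades))

def median_grading_error(peer_grades, staff_grades):
    """Median absolute deviation of peer grades from staff grades,
    the error statistic that dropped after the rubric revisions."""
    return median(abs(p - s) for p, s in zip(peer_grades, staff_grades))

# Hypothetical grades on a 0-100 scale:
peers = [78, 85, 90, 70]
staff = [75, 80, 95, 68]
```

Using the median rather than the mean for error keeps the statistic robust to the occasional wildly misgraded submission, which matters when flagging high-variance rubric items.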