For at least four decades, researchers have studied the effectiveness of interventions designed to increase well-being. These interventions have become known as positive psychology interventions (PPIs). Two highly cited meta-analyses examined the effectiveness of PPIs on well-being and depression: Sin and Lyubomirsky (2009) and Bolier et al. (2013). Sin and Lyubomirsky reported larger effects of PPIs on well-being (r = .29) and depression (r = .31) than Bolier et al. reported for subjective well-being (r = .17), psychological well-being (r = .10), and depression (r = .11). A detailed examination of the two meta-analyses reveals that the authors employed different approaches, used different inclusion and exclusion criteria, analyzed different sets of studies, described their methods with insufficient detail to compare them clearly, and did not report or properly account for significant small sample size bias. The first objective of the current study was to reanalyze the studies selected in each of the published meta-analyses while taking into account small sample size bias. The second objective was to replicate each meta-analysis by extracting relevant effect sizes directly from the primary studies included in the meta-analyses. The present study revealed three key findings: (1) many of the primary studies used a small sample size; (2) small sample size bias was pronounced in many of the analyses; and (3) when small sample size bias was taken into account, the effects of PPIs on well-being were small but significant (approximately r = .10), whereas the effects of PPIs on depression were variable, dependent on outliers, and generally not statistically significant. Future PPI research needs to focus on increasing sample sizes. Future meta-analyses of this research need to assess cumulative effects from a comprehensive collection of primary studies while being mindful of issues such as small sample size bias.
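The small sample size bias the abstract describes is conventionally probed with a funnel-plot asymmetry test such as Egger's regression: each study's standardized effect is regressed on its precision, and an intercept far from zero indicates that smaller studies report systematically larger effects. The abstract does not give the authors' procedure or data, so the sketch below is only a minimal illustration of the general technique, with made-up correlations and sample sizes.

```python
import numpy as np

def egger_test(r, n):
    """Egger-style regression test for small-study (small sample size) bias.

    r, n: per-study correlations and sample sizes.
    Correlations are Fisher z-transformed; the standard error of z is
    1 / sqrt(n - 3). Regressing (z / se) on (1 / se) gives an intercept
    that should be near zero in the absence of small-study bias.
    """
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)              # Fisher z transform of each correlation
    se = 1.0 / np.sqrt(n - 3.0)    # standard error of z
    y = z / se                     # standardized effect
    x = 1.0 / se                   # precision
    X = np.column_stack([np.ones_like(x), x])
    (intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
    return intercept, slope

# Hypothetical pattern: small studies show inflated effects,
# large studies show modest ones (values invented for illustration).
r = [0.45, 0.40, 0.30, 0.15, 0.12]
n = [20, 30, 60, 200, 400]
intercept, slope = egger_test(r, n)  # positive intercept flags asymmetry
```

In practice a significance test on the intercept (or a bias-adjusted estimator such as PET-PEESE or trim-and-fill) would follow; this fragment only shows the core regression.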
A number of studies investigating the relationship between personality and prospective memory (ProM) have appeared during the last decade. However, a review of these studies reveals little consistency in their findings and conclusions. To clarify the relationship between ProM and personality, we conducted two studies: a meta-analysis of prior research investigating the relationships between ProM and personality, and a study with 378 participants examining the relationships between ProM, personality, verbal intelligence, and retrospective memory. Our review of prior research revealed great variability in the measures used to assess ProM, and in the methodological quality of prior research; these two factors may partially explain inconsistent findings in the literature. Overall, the meta-analysis revealed very weak correlations (rs ranging from 0.09 to 0.10) between ProM and three of the Big Five factors: Openness, Conscientiousness, and Agreeableness. Our experimental study showed that ProM performance was related to individual differences such as verbal intelligence as well as to personality factors and that the relationship between ProM and personality factors depends on the ProM subdomain. In combination, the two studies suggest that ProM performance is relatively weakly related to personality factors and more strongly related to individual differences in cognitive factors.
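The pooled correlations the abstract reports (rs of 0.09 to 0.10) are typically obtained by Fisher z-averaging across studies, weighting each study by its inverse variance. The abstract does not specify the authors' exact model (fixed- vs. random-effects), so the following is a minimal fixed-effect sketch with invented study values.

```python
import numpy as np

def pool_correlations(r, n):
    """Fixed-effect pooled correlation via inverse-variance-weighted
    Fisher z averaging. The variance of Fisher z is 1 / (n - 3), so
    each study is weighted by n - 3; the pooled z is back-transformed
    to the correlation scale with tanh."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)
    w = n - 3.0
    z_bar = np.sum(w * z) / np.sum(w)
    return np.tanh(z_bar)

# Hypothetical per-study correlations and sample sizes
pooled = pool_correlations([0.05, 0.12, 0.10], [100, 80, 150])
```

A random-effects model would additionally estimate between-study heterogeneity and add it to each weight's denominator; for the very weak, fairly homogeneous effects described here, the two models give similar pooled values.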
Undergraduate students' interest in taking quantitative vs. non-quantitative courses has received limited attention even though it has important consequences for higher education. Previous studies have collected course interest ratings at the end of the courses as part of student evaluation of teaching (SET) ratings, which may confound prior interest in taking these courses with students' actual experience in taking them. This study is the first to examine undergraduate students' interest in quantitative vs. non-quantitative courses in their first year of studies, before they have taken any quantitative courses. Three hundred and forty students were presented with descriptions of 44 psychology courses and asked to rate their interest in taking each course. Student interest in taking quantitative vs. non-quantitative courses was very low; the mean interest in statistics courses was nearly 6 SDs below the mean interest in non-quantitative courses. Moreover, women were less interested in taking quantitative courses than men. Our findings have several far-reaching implications. First, evaluating professors teaching quantitative vs. non-quantitative courses against the same SET standard may be inappropriate. Second, if the same SET standard is used for the evaluation of faculty teaching quantitative vs. non-quantitative courses, faculty are likely to teach to SETs rather than focus on student learning. Third, universities interested primarily in student satisfaction may want to expunge quantitative courses from their curricula. In contrast, universities interested in student learning may want to abandon SETs as a primary measure of faculty teaching effectiveness. Fourth, undergraduate students who are not interested in taking quantitative courses are unlikely to pursue graduate studies in quantitative psychology and unlikely to be able to competently analyze data independently.
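The "nearly 6 SDs below" comparison is a standardized mean difference: the gap between two group means expressed in units of their pooled standard deviation. The abstract does not report the underlying means or SD, so the numbers below are purely hypothetical, chosen only to show how such a gap is computed.

```python
def standardized_gap(mean_a, mean_b, pooled_sd):
    """Difference between two group means in pooled-SD units
    (a Cohen's d-style standardized mean difference)."""
    return (mean_a - mean_b) / pooled_sd

# Hypothetical values: non-quantitative courses rated 4.0 on average,
# statistics courses 2.5, with a pooled SD of 0.25 across course means.
gap = standardized_gap(4.0, 2.5, 0.25)  # 6.0 SDs
```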