Divergent thinking (DT) is an important constituent of creativity that captures aspects of fluency and originality. The literature lacks multivariate studies that report relationships of DT and its aspects with relevant covariates, such as cognitive abilities, personality traits (e.g., openness), and insight. In two multivariate studies (N = 152 and N = 298), we evaluate competing measurement models for a variety of DT tests and examine the relationship between DT and established cognitive abilities, personality traits, and insight. A nested factor model with a general DT factor and a nested originality factor described the data well. In Study 1, DT was moderately related to working memory, fluid intelligence, crystallized intelligence, and mental speed. In Study 2, we replicated these results and added insight, openness, extraversion, and honesty-humility as covariates. DT was associated with insight, extraversion, and honesty-humility, whereas crystallized intelligence mediated the relationship between openness and DT. In contrast, the nested originality factor (i.e., the specificity of originality tasks beyond other DT tasks) had low variance and was not meaningfully related to any other construct in the nomological net. We highlight avenues for future research by discussing issues of measurement and scoring.
Intelligence has been declared a necessary but not sufficient condition for creativity, a claim that was subsequently (and erroneously) translated into the so-called threshold hypothesis. This hypothesis predicts a change in the correlation between creativity and intelligence at around 1.33 standard deviations above the population mean. A closer inspection of previous inconclusive results suggests that their heterogeneity is mostly due to the use of suboptimal data-analytic procedures. Herein, we applied and compared three methods that allowed us to treat intelligence as a continuous variable. In more detail, we examined the threshold of the creativity-intelligence relation with (a) scatterplots and heteroscedasticity analysis, (b) segmented regression analysis, and (c) local structural equation models in two multivariate studies (N1 = 456; N2 = 438). Across analytical procedures and in both studies, we found no evidence for the threshold hypothesis of creativity. Given the problematic history of the threshold hypothesis and its unequivocal rejection with appropriate multivariate methods, we recommend abandoning the threshold altogether.
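Method (b), segmented regression, estimates a single breakpoint in the intelligence-creativity relation and tests whether the slope changes there. The sketch below is a minimal grid-search implementation on simulated data, not the analysis code from the study; the variable names (`iq`, `dt`), the simulated breakpoint at IQ 120, and the slopes are illustrative assumptions only.

```python
import numpy as np

def segmented_regression(x, y, candidate_breaks):
    """Grid-search a single breakpoint c for the broken-line model
    y = b0 + b1*x + b2*max(0, x - c); b2 is the slope change at c."""
    best = None
    for c in candidate_breaks:
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best[1], best[2]  # estimated breakpoint and coefficients

# Simulated data with a true threshold at IQ 120 (slope 0.5 below, 0 above)
rng = np.random.default_rng(0)
iq = rng.normal(100, 15, 1000)
dt = 0.5 * np.minimum(iq, 120.0) + rng.normal(0, 2, 1000)

c_hat, beta = segmented_regression(iq, dt, np.arange(90.0, 131.0, 1.0))
```

Under the threshold hypothesis one would expect a reliably negative slope change (`beta[2]`) near the hypothesized cutoff; the studies above found no such change in real data.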
Overclaiming has been described as people's tendency to overestimate their cognitive abilities in general and their knowledge in particular. We discuss four perspectives on the phenomenon of overclaiming that have been proposed in the research literature: overclaiming as (a) a result of self-enhancement tendencies, (b) a cognitive bias (e.g., hindsight bias, memory bias), (c) a proxy for cognitive abilities, and (d) a sign of creative engagement. Moreover, we discuss two different scoring methods for an overclaiming questionnaire (OCQ): signal detection theory versus familiarity ratings. To distinguish between the different viewpoints on what overclaiming is, we juxtaposed overclaiming, as indicated by claiming familiarity with non-existent terms, with fluid and crystallized intelligence, self-reported knowledge, creativity, faking ability, and personality. Overclaiming was measured with a newly compiled overclaiming questionnaire. Several latent variable analyses based upon a multivariate study with 298 participants yielded the following results: First, overclaiming is predicted neither by honesty-humility nor by faking ability and therefore reflects something different than mere self-enhancement tendencies. Second, overclaiming is not predicted by crystallized intelligence but is highly predictive of self-reported knowledge and is thus not suitable as an index of, or proxy for, cognitive abilities. Finally, overclaiming is related neither to divergent thinking nor to originality, and is only moderately predicted by the self-reported creativity facet of HEXACO openness, which means that overclaiming does not reflect creative ability. In sum, our results favor an interpretation of overclaiming as a phenomenon that requires more than self-enhancement motivation, in contrast to the claim initially proposed in the literature.
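The signal-detection scoring mentioned above treats familiarity claims on real items as hits and claims on non-existent foils as false alarms, yielding a knowledge-accuracy index (d′) and a response-bias index (c), where a liberal bias (negative c) corresponds to overclaiming. The sketch below is a generic illustration, not the study's scoring code; the log-linear correction for extreme rates is a common convention assumed here, and the example responses are made up.

```python
from statistics import NormalDist

def sdt_overclaiming(claims_real, claims_foil):
    """Score a binary OCQ response vector with signal detection theory.
    claims_real / claims_foil: 0/1 familiarity claims on real vs. foil items.
    Returns (d_prime, bias_c); negative c = liberal responding = overclaiming."""
    z = NormalDist().inv_cdf

    def rate(claims):
        k, n = sum(claims), len(claims)
        # Log-linear correction keeps rates strictly inside (0, 1)
        return (k + 0.5) / (n + 1.0)

    h, fa = rate(claims_real), rate(claims_foil)
    d_prime = z(h) - z(fa)          # sensitivity: real vs. foil discrimination
    bias_c = -0.5 * (z(h) + z(fa))  # criterion: overall willingness to claim
    return d_prime, bias_c

# Example respondent: claims 9/10 real items and 6/10 foils as familiar
d_prime, bias_c = sdt_overclaiming([1] * 9 + [0], [1] * 6 + [0] * 4)
```

A respondent who claims many foils shows a markedly negative c even when d′ is positive, which is exactly the pattern the overclaiming index targets.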
Unproctored, web-based assessments are frequently compromised by a lack of control over the participants' test-taking behavior. It is likely that participants cheat if personal consequences are high. This meta-analysis summarizes findings on context effects in unproctored versus proctored ability assessments and examines mean score differences and correlations between both assessment contexts. As potential moderators, we consider (a) the perceived consequences of the assessment, (b) countermeasures against cheating, (c) the susceptibility of the measure itself to cheating, and (d) the use of different test media. For standardized mean differences, a three-level random-effects meta-analysis based on 109 effect sizes from 49 studies (total N = 100,434) identified a pooled effect of Δ = 0.20, 95% CI [0.10, 0.31], indicating higher scores in unproctored assessments. Moderator analyses revealed significantly smaller effects for measures that are difficult to search for on the Internet. These results demonstrate that unproctored ability assessments are biased by cheating. Unproctored assessments may be most suitable for tasks that are difficult to search for on the Internet.
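As a rough illustration of how such a pooled effect and confidence interval arise, the sketch below implements the classic DerSimonian-Laird random-effects estimator. This is a simplification for exposition: the meta-analysis above used a three-level model that additionally accounts for effect sizes nested within studies, and the input numbers here are made up.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool standardized mean differences with a DerSimonian-Laird
    random-effects model; returns (pooled effect, 95% CI lower, upper)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q quantifies between-study heterogeneity
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    # Re-weight by total (within + between) variance and pool
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Three hypothetical study effects with equal sampling variances
pooled, lo, hi = dersimonian_laird([0.1, 0.2, 0.3], [0.01, 0.01, 0.01])
```

When the observed heterogeneity Q does not exceed its degrees of freedom, tau² is truncated at zero and the estimate collapses to the fixed-effect result.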