2019
DOI: 10.1037/aca0000233
Creativity assessment in psychological research: (Re)setting the standards.

Abstract: This commentary discusses common relevant themes that have been highlighted across contributions in this special issue on "Creativity Assessment: Pitfalls, Solutions, and Standards." We first highlight the challenges of operationalizing creativity through the use of a range of measurement approaches that are simply not tapping into the same aspect of creativity. We then discuss pitfalls and challenges of the three most popular measurement methods employed in the field, namely divergent thinking tasks, productb…


Cited by 96 publications (96 citation statements)
References 58 publications (150 reference statements)
“…In field studies, innovation outcomes are usually rated either by the employees themselves (e.g., Axtell et al., 2000; Zacher and Wilden, 2014; Zacher et al., 2016) or their supervisors (e.g., Janssen, 2000; Shalley et al., 2004). These two methods have a number of limitations (Hülsheger et al., 2009; Barbot et al., 2019). Most importantly, self-ratings of innovation performance are correlated with motivation and self-efficacy for innovation and, therefore, their validity can be questioned (Reiter-Palmon et al., 2012, 2019; Barbot et al., 2019).…”
Section: Discussion
confidence: 99%
“…According to the definition of creativity as novel ideas (Amabile, 1988; Amabile and Pratt, 2016), creativity performance was measured as the percentage of new ideas relative to the total number of ideas generated by each participant (Hagtvedt et al., 2016). In line with existing research on brainstorming tasks, we used a rater-based assessment for this purpose (Barbot et al., 2019; Reiter-Palmon et al., 2019). First, a trained research assistant (coder one) counted all the ideas developed during the creativity task for each participant.…”
Section: Dependent Variable
confidence: 99%
“…Creativity researchers have long grappled with how to measure creativity. Indeed, the question of how best to capture creativity remains open and active, as reflected in a recent special issue on creativity assessment published in Psychology of Aesthetics, Creativity, and the Arts (Barbot, Hass, & Reiter-Palmon, 2019). Over the years, a range of assessment approaches have been developed, from methods that rely on experts to judge the creative quality of products (i.e., the Consensual Assessment Technique; Amabile, 1983; Cseh & Jeffries, 2019), to frequency-based methods that use standardized norms (Forthmann, Paek, Dumas, Barbot, & Holling, 2019; Torrance, 1972), to subjective scoring methods that rely on layperson judgments (Silvia et al., 2008).…”
Section: An Open Platform for Computing Semantic Distance
confidence: 99%
“…It has also been pointed out that creativity is a complex, composite, and multidimensional construct, and it has been argued that it cannot be addressed only by qualitative observations, highlighting the need for quantitative standardized methods (Torrance, 1988; Dietrich, 2004; Palmiero et al., 2012; Abraham, 2016; Barbot et al., 2019). Consequently, the construct of Divergent Thinking (DT), first proposed by Guilford in 1956, has been considered in a great number of experimental studies.…”
Section: Introduction
confidence: 99%
“…Nowadays, researchers consider DT as an indicator of creative potential (Runco and Acar, 2012; Acar and Runco, 2019): thus, not a synonym for creativity but an affordable predictor of future creative achievement (Kim, 2008). Furthermore, DT is widely employed in the experimental field because it is believed to elicit the cognitive processes that lead to creative idea generation (Barbot et al., 2019; Benedek and Fink, 2019) and because it can be easily measured with psychometric tools (Torrance, 1988; Acar and Runco, 2019; Barbot et al., 2019).…”
Section: Introduction
confidence: 99%