2020
DOI: 10.3758/s13428-020-01453-w

Automating creativity assessment with SemDis: An open platform for computing semantic distance

Abstract: Creativity research requires assessing the quality of ideas and products. In practice, conducting creativity research often involves asking several human raters to judge participants' responses to creativity tasks, such as judging the novelty of ideas from the alternate uses task (AUT). Although such subjective scoring methods have proved useful, they have two inherent limitations: labor cost (raters typically code thousands of responses) and subjectivity (raters vary in their perceptions and preferences), raising…
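The core computation behind semantic-distance scoring can be illustrated briefly. The sketch below is not SemDis itself: the vectors, names, and values are hypothetical stand-ins for embeddings drawn from a trained semantic space (e.g., LSA or GloVe), but the distance measure (1 minus cosine similarity) is the standard one.

```python
import numpy as np

def semantic_distance(prompt_vec: np.ndarray, response_vec: np.ndarray) -> float:
    """Semantic distance = 1 - cosine similarity between two word vectors."""
    cos = np.dot(prompt_vec, response_vec) / (
        np.linalg.norm(prompt_vec) * np.linalg.norm(response_vec)
    )
    return 1.0 - cos

# Toy 4-dimensional vectors standing in for embeddings of an AUT prompt
# ("brick") and two responses; real semantic spaces have hundreds of dimensions.
brick     = np.array([0.9, 0.1, 0.0, 0.2])
doorstop  = np.array([0.8, 0.2, 0.1, 0.1])   # common use -> small distance
art_piece = np.array([0.1, 0.9, 0.7, 0.3])   # remote use -> large distance

print(semantic_distance(brick, doorstop))    # lower score, less novel
print(semantic_distance(brick, art_piece))   # higher score, more novel
```

Under this scheme, responses whose vectors lie farther from the prompt's vector receive higher novelty scores, replacing human novelty judgments with a fixed, reproducible computation.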

Cited by 190 publications (294 citation statements)
References 85 publications
“…Since DT is considered to be a marker of creative potential ( Runco and Acar 2012 ), the assessment of the originality of the responses seems required (e.g., Zeng et al 2011 ) and, as a matter of fact, is provided in most of the recent studies. All scoring methods have their weaknesses, and the development of new methods (e.g., based on corpus semantics; Beaty and Johnson 2020 ; Dumas et al 2020 ) continues, and we can be curious about how the weaknesses of the current methods will be mitigated.…”
Section: Discussion (mentioning)
confidence: 99%
“…The original answers were mostly mentioned by only 5% or less of the children, while most of the unoriginal responses were produced by 15 to 20% or more, which is clearly above the 10% threshold. However, to avoid a lack of measurement precision, intersubjective rating of originality or automated scoring is recommended in future research on creativity (for more information, see Beaty and Johnson 2020 or Dumas et al 2020 ). Second, the task instruction may have primed the children in the high stimulus condition too strongly to outward perceptual processing while a mix of outward and inward memory-based processing could have promoted originality better.…”
Section: Discussion (mentioning)
confidence: 99%
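The 10% infrequency criterion this excerpt refers to can be made concrete. Below is a minimal sketch of frequency-based originality flagging; the function name, threshold default, normalization step, and sample responses are illustrative, not taken from the study.

```python
from collections import Counter

def originality_flags(responses: list[str], threshold: float = 0.10) -> dict[str, bool]:
    """Flag a response as original if at most `threshold` of the sample gave it."""
    normalized = [r.strip().lower() for r in responses]  # crude response pooling
    counts = Counter(normalized)
    n = len(normalized)
    return {resp: counts[resp] / n <= threshold for resp in counts}

# Ten hypothetical AUT responses for "brick":
sample = ["doorstop"] * 6 + ["paperweight"] * 3 + ["pigment for paint"]
print(originality_flags(sample))
# {'doorstop': False, 'paperweight': False, 'pigment for paint': True}
```

The measurement-precision concern raised in the excerpt is visible even in this toy: with small samples, a single response can sit exactly at the threshold, which is why the authors point to intersubjective rating or automated scoring instead.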
“…To address these limitations, recent efforts have moved towards using computational algorithms to score responses (10,16,25,26). Computational methods may also improve the theoretical grounding of the measures, as the assumptions required to score the responses must be made explicit in the program code (10,12).…”
Section: Main Text 3.1 Introduction (mentioning)
confidence: 99%
“…Computational methods may also improve the theoretical grounding of the measures, as the assumptions required to score the responses must be made explicit in the program code (10,12). Researchers have successfully scored the Alternative Uses Task using semantic models, achieving scores similar to human ratings (10,16,25,26).…”
Section: Main Text 3.1 Introduction (mentioning)
confidence: 99%
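A common way to check that automated methods achieve "scores similar to human ratings," as this excerpt puts it, is to correlate the model's scores with mean human judgments. The sketch below uses invented numbers; Pearson correlation is one standard agreement metric for this kind of validation, not necessarily the only one used in the cited work.

```python
import numpy as np

# Hypothetical automated semantic-distance scores and mean human novelty
# ratings for five AUT responses (values are made up for illustration).
auto_scores   = np.array([0.12, 0.35, 0.48, 0.71, 0.83])
human_ratings = np.array([1.2,  2.0,  2.4,  3.6,  4.1])  # e.g., 1-5 scale

r = np.corrcoef(auto_scores, human_ratings)[0, 1]
print(f"Pearson r = {r:.2f}")  # a high r indicates human-like scoring
```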