2021
DOI: 10.1037/aca0000319
Measuring divergent thinking originality with human raters and text-mining models: A psychometric comparison of methods.

Abstract: Within creativity research, interest and capability in utilizing text-mining models to quantify the originality of participant responses to divergent thinking tasks have risen sharply over the last decade, with many extant studies fruitfully using such methods to uncover substantive patterns among creativity-relevant constructs. However, no systematic psychometric investigation of the reliability and validity of human-rated originality scores, and scores from various freely available text-mining systems, exists…
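The freely available text-mining systems the abstract compares typically operationalize originality as semantic distance between the task prompt and a response in a word-embedding space. Below is a minimal sketch of that idea, assuming gensim and one of its pre-trained GloVe models; the scoring rule (mean cosine distance from the prompt word) and the example words are illustrative, not the paper's exact method.

```python
# Minimal sketch of semantic-distance originality scoring for an
# Alternative Uses Task response. Assumes gensim is installed; the
# prompt/response pairs below are illustrative only.
import numpy as np
import gensim.downloader as api

# One freely available pre-trained embedding model (example choice).
model = api.load("glove-wiki-gigaword-300")

def originality(prompt: str, response_words: list[str]) -> float:
    """Score a response as its mean cosine distance from the prompt:
    more semantically distant responses count as more original."""
    distances = [
        1.0 - model.similarity(prompt, w)
        for w in response_words
        if w in model.key_to_index  # skip out-of-vocabulary words
    ]
    return float(np.mean(distances)) if distances else float("nan")

# A common use ("build") should score lower than a remote one ("paperweight").
print(originality("brick", ["build"]))
print(originality("brick", ["paperweight"]))
```

Actual systems differ in corpus, embedding model, and aggregation choices, which is precisely the kind of variation the paper's psychometric comparison targets.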

Cited by 104 publications (198 citation statements)
References 71 publications
“…When listing uses of a brick in the Alternative Uses Task, for example, a bricklayer may provide different responses than a lawyer, influencing their scores within the sample and leading to more uncontrollable variation. Similarly, different words in the Alternative Uses Task can prompt different responses with varying reliability between computational scoring and manually scored responses (26). The DAT avoids these issues by giving an open-ended prompt and using an international database.…”
Section: Strengths (mentioning, confidence: 99%)
“…Like all computational scoring methods, the DAT depends on the training model and corpus used. We chose the GloVe algorithm and the Common Crawl corpus; this combination correlates best with human judgements on the Alternative Uses Task (26). For simplicity, we chose a pre-trained model that is freely available and widely used.…”
Section: Future Research (mentioning, confidence: 99%)
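This statement sketches how DAT-style scoring works: the mean pairwise semantic distance among the words a participant names, under a chosen embedding model and corpus. A hypothetical illustration follows; gensim's downloader does not ship the GloVe/Common Crawl combination the authors cite, so a Wikipedia-trained GloVe stands in, and the real task's response validation (e.g., nouns only, a fixed number of valid words) is omitted.

```python
# Hypothetical sketch of DAT-style scoring: the score is the mean
# pairwise semantic distance among the words a participant names,
# rescaled by 100 as in the published DAT.
from itertools import combinations
import numpy as np
import gensim.downloader as api

# Stand-in model; the cited work pairs GloVe with the Common Crawl corpus.
model = api.load("glove-wiki-gigaword-300")

def dat_score(words: list[str]) -> float:
    """Mean pairwise cosine distance among in-vocabulary words, times 100."""
    words = [w for w in words if w in model.key_to_index]
    dists = [1.0 - model.similarity(a, b) for a, b in combinations(words, 2)]
    return 100.0 * float(np.mean(dists)) if dists else float("nan")

# Semantically scattered words should score higher than related ones.
print(dat_score(["cat", "dog", "mouse", "hamster"]))        # related: lower
print(dat_score(["volcano", "needle", "jazz", "freedom"]))  # distant: higher
```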