2020 · DOI: 10.1002/tea.21658

From substitution to redefinition: A framework of machine learning‐based science assessment

Abstract: This study develops a framework to conceptualize the use and evolution of machine learning (ML) in science assessment. We systematically reviewed 47 studies that applied ML in science assessment and classified them into five categories: (a) constructed response, (b) essay, (c) simulation, (d) educational game, and (e) inter‐discipline. We compared the ML‐based and conventional science assessments and extracted 12 critical characteristics to map three variables in a three‐dimensional framework: construct, funct…

Cited by 61 publications (75 citation statements) · References 80 publications
“…Potential risk of misrepresenting the construct of interest. In their study, Zhai et al. (2020b) suggest that most ML-based NGSAs target complex and structural constructs of science learning. Complexity, according to Bloom's taxonomy (Forehand, 2010), denotes the rank of cognitive demands for science learning goals.…”
Section: Cognitive Validity: Targeting the Three-Dimensionality of Science Learning (mentioning, confidence: 99%)
“…However, it will be critical to have evidence that the algorithmic models developed on one set of responses are applicable to another set of responses. Researchers can choose among self-, split-, or cross-validation approaches depending on the research purpose, but cross-validation was found to be the most frequently used (Zhai et al., 2020b). Cross-validation requires partitioning the data into n groups and then using (n−1) groups to train the machine while testing the algorithmic model on the remaining group.…”
Section: A Validity Inferential Network for Machine Learning-Based Science Assessments (mentioning, confidence: 99%)
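The cross-validation procedure quoted above can be sketched in a few lines of Python. This is a minimal illustration, not the cited authors' implementation: the data is partitioned into n folds, a model is "trained" on n−1 folds and tested on the held-out fold. The scoring model here is a deliberately simple majority-class baseline standing in for whatever ML scoring algorithm a study would actually use.

```python
# Minimal sketch of n-fold cross-validation: split the data into n
# groups, train on n-1 of them, and evaluate on the held-out group.
# The "model" is a placeholder majority-class baseline, not a real
# ML-based scoring model.
from collections import Counter

def cross_validate(responses, labels, n=5):
    """Return per-fold accuracies for a majority-class baseline."""
    folds = [list(range(i, len(responses), n)) for i in range(n)]
    accuracies = []
    for held_out in folds:
        train_idx = [i for i in range(len(responses)) if i not in held_out]
        # "Train": learn the most frequent label across the n-1 training folds.
        majority = Counter(labels[i] for i in train_idx).most_common(1)[0][0]
        # "Test": score the held-out fold against that prediction.
        correct = sum(1 for i in held_out if labels[i] == majority)
        accuracies.append(correct / len(held_out))
    return accuracies

# Toy data: 10 hypothetical student responses with binary scores.
labels = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
scores = cross_validate(list(range(10)), labels, n=5)
print(scores)
```

In practice one would replace the majority-class baseline with the actual scoring algorithm and average the per-fold accuracies (or human–machine agreement statistics) to estimate how well a model trained on one set of responses generalizes to another.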
“…Unfortunately, CR items are both time- and resource-consuming to score compared with multiple-choice items, and thus teachers may not be willing to implement them in their classrooms. Approaches that employ machine learning have shown great potential for automatically scoring CR assessments (Zhai et al., 2020a). As indicated in a recent review study (Zhai et al., 2020c), machine learning has been adopted in many science assessment practices using CRs, essays, educational games, and interdisciplinary assessments (e.g., Lee et al., 2019a; Nehm et al., 2012).…”
(mentioning, confidence: 99%)
“…While the potential of machine learning has been recognized, few studies have tackled the true challenge of scoring CR items on multi-dimensional science assessments. Relatively few studies apply machine learning to analyze assessment items in which students perform tasks that require multiple dimensions of scientific knowledge to make sense of phenomena (Zhai et al., 2020a). In addition, none of the studies explicitly document whether and how these assessments measure the dimensionalities of science learning.…”
(mentioning, confidence: 99%)