2021
DOI: 10.1007/s10956-021-09901-8
Practices and Theories: How Can Machine Learning Assist in Innovative Assessment Practices in Science Education

Abstract: As cutting-edge technologies, such as machine learning (ML), are increasingly involved in science assessments, it is essential to conceptualize how assessment practices are innovated by technologies. To partially meet this need, this article focuses on ML-based science assessments and elaborates on how ML innovates assessment practices in science education. The article starts with an articulation of the "practice" nature of assessment both of learning and for learning, identifying four essential assessment pra…

Cited by 40 publications (31 citation statements)
References 63 publications (103 reference statements)
“…Chapelle and Voss (2017) remarked that the technological advances in language testing and other natural-language-processing evaluations need to show their comparability with other classic psychoeducational tests to improve the current approaches to language assessment (note a similar rationale behind how the term "validity" changed for language assessment in Chapelle, 1999). While there is an important relation between technological advances and language assessment (e.g., Chapelle & Voss, 2016, 2021), there remains a need to continuously improve the design of computer-assisted language tests and the ways to demonstrate their validity. In this regard, natural language processing (NLP) research and other advances in language testing systematically lack empirical tests of measurement models.…”
Section: Using Psychometrics To Infer Constructs From Computational Indicators
“…First, different human raters read the instructional texts and generated ideal summaries. These ideal summaries were then used to extract common and necessary topics from each instructional text by consensus (these conceptual axes have been previously validated in Martínez-Huertas et al., 2018, 2021). Thereafter, an assessment rubric with different conceptual axes was created for each instructional text.…”
Section: Instruments
“…Created by a series of seemingly invisible and unconscious human decisions, ML's seemingly benign artifice is shot through with critical assumptions about language, race, and power. ML's promise to "revolutionize" assessments by offering efficiency, reliability, and "unbiased" interpretations of students' thinking is at odds with how it polices linguistic borders and places learners into proficiency bins (Zhai, 2021). A closer look reveals ML science assessment practices as reflecting a narrow band of what constitutes appropriate "language of science," privileging academic language over students' everyday nonspecialist vernacular in how they engage with science ideas.…”
Section: Bias In Science Assessments
“…They are subsequently penalized for making sense of the world around them with their nonnormative yet rich linguistic practices. While champions of these abstract ML black boxes point to gains in efficiency and automaticity (Zhai et al., 2020; Zhai, 2021), it would appear that what is really efficient in the move…”
Section: Disempowering Algorithmic Systems