2022
DOI: 10.1007/s40593-022-00290-6

Utilizing a Pretrained Language Model (BERT) to Classify Preservice Physics Teachers’ Written Reflections

Abstract: Computer-based analysis of preservice teachers’ written reflections could enable educational scholars to design personalized and scalable intervention measures to support reflective writing. Algorithms and technologies in the domain of research related to artificial intelligence have been found to be useful in many tasks related to reflective writing analytics such as classification of text segments. However, mostly shallow learning algorithms have been employed so far. This study explores to what extent deep …

Cited by 13 publications (5 citation statements)
References 59 publications
“…Both training datasets were split into 70% training, 15% validation, 15% test (hold-out) for article classification (Table 1) and NER tasks (Table 2). Following common practice, the training sets were used to fine-tune the models on their respective tasks, the validation sets were used to compare the fine-tuned models for selection, and the test sets were used to evaluate how the models perform on unseen data [23].…”
Section: Methods (mentioning; confidence: 99%)
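The quoted passage describes a standard 70/15/15 train/validation/test split. Below is a minimal sketch of such a split using scikit-learn; the data and variable names are illustrative placeholders, not the cited study's materials.

```python
# A minimal sketch of a 70/15/15 train/validation/test split as described
# in the quoted Methods passage. Data and names are placeholders.
from sklearn.model_selection import train_test_split

texts = [f"reflection segment {i}" for i in range(100)]  # placeholder documents
labels = [i % 2 for i in range(100)]                     # placeholder binary labels

# First carve out the 70% training portion.
X_train, X_rest, y_train, y_rest = train_test_split(
    texts, labels, train_size=0.70, stratify=labels, random_state=42
)
# Split the remaining 30% evenly into validation and test (15% each overall).
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=42
)
```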
“…An illustrative example of this application is evident in the work of , wherein LR models were employed to identify the causal structure present in students' scientific explanations. Transformer-based Natural Language Processing (NLP) models, as exemplified by prominent instances such as BERT and GPT, have become the de facto industry standard for a diverse range of NLP downstream tasks (Cochran, Cohn, Rouet, and Hastings, 2023; Wulff et al., 2023). Prior research (e.g., Cochran et al., 2022) has consistently highlighted the effectiveness of BERT-based transformers in evaluating students' responses to STEM-related questions.…”
Section: Automated Analysis of CR Assessment (mentioning; confidence: 99%)
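The quoted passage points to BERT-based transformers as the standard for classifying student responses. The following is a minimal sketch of such a classifier using the Hugging Face transformers library; the checkpoint name, label count, and example sentence are illustrative assumptions, not the cited studies' actual setups.

```python
# Minimal sketch of a BERT-based response classifier of the kind the quoted
# passage refers to. "bert-base-uncased" and num_labels=2 are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Tokenize a (hypothetical) student response and predict its class.
inputs = tokenizer(
    ["The ball falls because gravity acts on it."],
    padding=True, truncation=True, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()  # index of the top-scoring label
```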
“…These methods require less human effort, especially when a reliable scoring rubric has already been applied to many student responses (Haudek et al., 2011). However, the mentioned natural language processing techniques are built on the simplified assumption that the word order is irrelevant to the meaning of a sentence (Wulff et al., 2022b), which complicates the detection of implicit semantic embeddings. So, traditional ML models are only sensitive to key conceptual components, which is why we define construct assessment based on shallow learning experiences as the second level of the ML-adapted ECD.…”
Section: Evidence Space (mentioning; confidence: 99%)
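The word-order assumption the quoted passage mentions can be made concrete with a small illustration: a bag-of-words representation maps two differently ordered (and differently meaning) sentences to the same feature vector. The sentences below are invented examples.

```python
# Illustration of the "word order is irrelevant" assumption in traditional
# NLP pipelines: bag-of-words vectors ignore ordering entirely.
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "the force causes the acceleration",
    "the acceleration causes the force",  # reversed meaning, same words
]
vectorizer = CountVectorizer()
vectors = vectorizer.fit_transform(sentences).toarray()
print((vectors[0] == vectors[1]).all())  # True: identical feature vectors
```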
“…So, future research needs to check whether such cutting-edge techniques can also be used to accurately evaluate short, content-rich scientific explanations. Perhaps such technologies will produce better outcome metrics than the n-gram approach; Dood et al. (2022) and Winograd et al. (2021b) in chemistry education research, as well as Wulff et al. (2022a, 2022b) in physics education research, have laid the foundation for future research in this area.…”
Section: Validation Approaches (mentioning; confidence: 99%)
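For reference, the n-gram approach the quoted passage compares against can be sketched as follows; the example text is invented and the setup is generic, not the cited studies' pipelines.

```python
# Hedged sketch of n-gram feature extraction (unigrams and bigrams) of the
# kind the quoted passage refers to as an outcome metric baseline.
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigrams and bigrams
vectorizer.fit(["acids donate protons to bases"])
print(vectorizer.get_feature_names_out())
# ['acids', 'acids donate', 'bases', 'donate', 'donate protons',
#  'protons', 'protons to', 'to', 'to bases']
```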