LAK21: 11th International Learning Analytics and Knowledge Conference 2021
DOI: 10.1145/3448139.3448168

A Deep Transfer Learning Approach to Modeling Teacher Discourse in the Classroom

Abstract: Teachers, like everyone else, need objective, reliable feedback in order to improve their effectiveness. However, developing a system for automated teacher feedback entails many decisions regarding data collection procedures, automated analysis, and presentation of feedback for reflection. We address the latter two questions by comparing two different machine learning approaches to automatically model seven features of teacher discourse (e.g., use of questions, elaborated evaluations). We compared a traditional…


Cited by 23 publications (12 citation statements)
References 58 publications
“…Google has provided BERT models pre-trained on large corpora (Wikipedia and BookCorpus) for developers and researchers to fine-tune and apply to their own tasks. In learning analytics, the BERT model shows great potential for automatically analysing teacher discourse in online classrooms (Jensen et al., 2021). A fine-tuned BERT model was applied to analyse cognitive presence in discussion messages from a computer science MOOC and a for-credit course, reporting promising F1 scores of 0.95 (Hosmer & Lee, 2021; Lee et al., 2022).…”
Section: Deep Learning and Multi-label Classifiers for MOOC Discussion
Citation type: mentioning
confidence: 99%
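
The fine-tuning workflow this statement describes can be sketched in a few lines with the HuggingFace transformers library. The sketch below is an illustration only, assuming a toy binary discourse label (question vs. non-question); the model name, labels, and hyperparameters are assumptions, not the setup of Jensen et al. (2021) or the other cited studies.

import torch
from torch.optim import AdamW
from transformers import BertForSequenceClassification, BertTokenizer

# Load BERT pre-trained on Wikipedia + BookCorpus, with a fresh
# two-class classification head on top.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Hypothetical labeled teacher utterances (1 = question, 0 = other).
texts = ["What do you think happens next?", "Open your books to page ten."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few fine-tuning passes over the toy batch
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

Because the encoder weights start from the pre-trained checkpoint, only the small classification head is learned from scratch, which is what lets fine-tuning work with the modest labeled datasets typical of classroom-discourse coding.
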
“…Prior work in computationally analyzing classroom discourse has employed a variety of techniques to automatically detect teacher discourse variables. Recent advances in natural language processing have led to a growing body of work applying neural methods, with varying levels of success, to detect classroom discourse variables such as semantic content, instructional talk, and elaborated evaluation (Jensen et al., 2021; Song et al., 2021). Among unsupervised approaches, Demszky et al. (2021a), which is the most similar to our work in approach and dataset, propose an unsupervised measure of teachers' uptake of students' contributions, and we use their sample in our annotation for funneling and focusing.…”
Section: Contributions
Citation type: mentioning
confidence: 98%
“…Similarly, prior work has used a chunking approach in which documents were broken into multiple chunks and the resulting activations were then combined to perform the task. Another recent example is the BERT-Seq model for classifying Collaborative Problem Solving (Pugh et al., 2021). The BERT-Seq model uses a special input representation that combines embeddings from adjacent utterances as contextual cues for the model.…”
Section: Transformers for Additional Context and Long-term Dependencies
Citation type: mentioning
confidence: 99%
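
The chunk-and-combine idea described in this statement can be illustrated generically: split an input that exceeds BERT's length limit into chunks, encode each chunk separately, and pool the per-chunk activations into one representation. The sketch below is a simplified, assumption-based illustration of that pattern (mean-pooling the [CLS] activations), not the BERT-Seq model itself; the chunk size and pooling choice are arbitrary.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def encode_long_text(text: str, chunk_size: int = 128) -> torch.Tensor:
    """Encode a text longer than BERT's input limit by chunking."""
    # Overflowing tokens become extra rows in the batch, so each
    # chunk is encoded separately instead of being truncated away.
    inputs = tokenizer(
        text,
        max_length=chunk_size,
        truncation=True,
        padding="max_length",
        return_overflowing_tokens=True,
        return_tensors="pt",
    )
    inputs.pop("overflow_to_sample_mapping")  # bookkeeping, not a model input
    with torch.no_grad():
        out = encoder(**inputs)
    cls_per_chunk = out.last_hidden_state[:, 0]  # (n_chunks, hidden_size)
    return cls_per_chunk.mean(dim=0)  # combine the chunk activations

# Example: one pooled vector for a transcript far beyond 512 tokens.
vector = encode_long_text("teacher and student turns " * 500)

Mean-pooling is only one way to combine chunk activations; max-pooling, or running a sequence model over the chunk vectors (closer in spirit to BERT-Seq's use of adjacent-utterance context), are common alternatives.
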