Proceedings of the 16th International Conference on Multimodal Interaction 2014
DOI: 10.1145/2663204.2663264

The Additive Value of Multimodal Features for Predicting Engagement, Frustration, and Learning during Tutoring

Abstract: Detecting learning-centered affective states is difficult, yet crucial for adapting most effectively to users. Within tutoring in particular, the combined context of student task actions and tutorial dialogue shapes the student's affective experience. As we move toward detecting affect, we may also supplement the task and dialogue streams with rich sensor data. In a study of introductory computer programming tutoring, human tutors communicated with students through a text-based interface. Automated approaches w…
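The paper's "additive value" question (how much each extra modality improves prediction) can be made concrete with a small sketch. The snippet below is not the authors' pipeline; the feature channels, synthetic data, and model choice (ridge regression scored by cross-validated R^2) are illustrative assumptions only.

# Hypothetical sketch of an "additive value" evaluation: train predictors on
# progressively larger multimodal feature sets and compare cross-validated fit.
# All feature names and data here are stand-ins, not the paper's streams.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_students = 60

task = rng.normal(size=(n_students, 3))       # e.g., task actions (compile/run)
dialogue = rng.normal(size=(n_students, 4))   # e.g., tutorial dialogue features
sensors = rng.normal(size=(n_students, 5))    # e.g., facial/posture sensor features
engagement = rng.normal(size=n_students)      # stand-in outcome measure

feature_sets = {
    "task": task,
    "task+dialogue": np.hstack([task, dialogue]),
    "task+dialogue+sensors": np.hstack([task, dialogue, sensors]),
}

# The additive value of a channel is the gain in predictive fit when it is
# appended to the channels already in the model.
for name, X in feature_sets.items():
    r2 = cross_val_score(Ridge(alpha=1.0), X, engagement, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.3f}")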

Cited by 54 publications (24 citation statements)
References 23 publications
“…In fact, the posture-based detectors performed only slightly better than chance, and in the case of some algorithms, worse than chance (D'Mello and Graesser 2009; Grafsgaard et al. 2014). There are several possible explanations for why the posture-based predictors were not more effective.…”
Section: Discussion of Initial Affect Detector Modeling Results (citation type: mentioning, confidence: 99%)
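To make "slightly better than chance" concrete: a common sanity check is to score a detector against a majority-class baseline on the same cross-validation folds. The sketch below is purely illustrative, with synthetic features, synthetic labels, and an arbitrary model choice; it is not the cited study's procedure.

# Illustrative comparison of a detector against a chance (majority-class)
# baseline. Features and labels are random, so the detector should hover
# near the baseline, mirroring the "only slightly better than chance" finding.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))      # stand-in posture features
y = rng.integers(0, 2, size=200)   # stand-in binary affect labels

chance = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5)
model = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"chance baseline accuracy: {chance.mean():.3f}")
print(f"detector accuracy:        {model.mean():.3f}")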
“…Several research labs have investigated sensor-based affect recognition in learning environments over the past decade, including work with facial expression, eye tracking (Jaques et al. 2014), and posture (Grafsgaard et al. 2014). In this work, we focus primarily on posture sensor-based models of affect recognition.…”
Section: Sensor-based Detectors of Learner Engagement and Affect (citation type: mentioning, confidence: 99%)