2018
DOI: 10.1007/978-3-319-91464-0_15
Predicting Learners’ Emotions in Mobile MOOC Learning via a Multimodal Intelligent Tutor


Cited by 15 publications (7 citation statements, all classified as mentioning)
References 21 publications
“…The fluctuation of emotions is often ignored in technology-enhanced learning research [66]. An important research direction is evaluating the feasibility of employing ubiquitous technologies, such as smartphones and web cameras, to measure learners' photoplethysmography (PPG) signals and facial expressions [67] in a nonintrusive way, thereby supplementing self-reported, clickstream, and forum textual data to better capture emotional engagement. Another promising line of research is applying natural language processing techniques and evaluating the feasibility of adopting a virtual assistant to capture emotional engagement throughout a MOOC [68].…”
Section: Discussion (mentioning)
confidence: 99%
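Smartphone-based PPG measurement of the kind mentioned above typically reads blood-volume pulse from camera frames captured while a fingertip covers the lens. Below is a minimal illustrative sketch of that idea, not code from the cited works; the function names, frame format (BGR), sampling rate, and filter thresholds are all assumptions.

```python
# Minimal sketch: estimating heart rate from camera-based PPG.
# Assumes a list of BGR frames recorded while a fingertip covers
# the back camera lens; all names here are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def ppg_signal(frames):
    """Mean red-channel intensity per frame approximates blood-volume pulse."""
    return np.array([frame[:, :, 2].mean() for frame in frames])

def heart_rate_bpm(signal, fps=30.0):
    """Band-pass filter to a plausible pulse range, then count peaks."""
    nyq = fps / 2.0
    b, a = butter(2, [0.7 / nyq, 4.0 / nyq], btype="band")  # ~42-240 bpm
    filtered = filtfilt(b, a, signal - signal.mean())
    peaks, _ = find_peaks(filtered, distance=fps / 4.0)  # peaks >= 0.25 s apart
    duration_s = len(signal) / fps
    return 60.0 * len(peaks) / duration_s
```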
“…Another strand of work focused on inferring learners' cognitive and affective states in mobile-based MOOCs captures photoplethysmography (PPG) signals and facial expression data (Pham & Wang, 2018a, 2018b; Xiao & Wang, 2015). AttentiveLearner uses on-lens finger gestures on the back camera for video control (i.e., a video plays when the camera lens is covered and pauses when it is uncovered).…”
Section: Online Video Lectures and State-of-the-Art LA: Challenges an… (mentioning)
confidence: 99%
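The play/pause behavior described above implies detecting whether a fingertip covers the lens. A hypothetical heuristic, assumed here rather than taken from the AttentiveLearner papers, is that a covered lens yields dim, red-dominated, low-texture frames:

```python
# Illustrative covered-lens heuristic for on-lens video control.
# Thresholds, names, and the `player` interface are assumptions.
import numpy as np

def lens_covered(frame_bgr, red_ratio_min=0.5, std_max=20.0):
    """Return True when the frame looks like a fingertip over the lens."""
    b, g, r = (frame_bgr[:, :, i].astype(float).mean() for i in range(3))
    red_dominance = r / max(b + g + r, 1e-6)  # skin transmits mostly red light
    low_texture = frame_bgr.std() < std_max   # flat, defocused fingertip
    return red_dominance > red_ratio_min and low_texture

def update_playback(player, frame_bgr):
    """Play while the lens is covered, pause when it is uncovered."""
    if lens_covered(frame_bgr):
        player.play()
    else:
        player.pause()
```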
“…Initial results demonstrated how instructors can identify confusing topics for learners from overall feedback (i.e., aggregated cognitive states inferred from PPG signals). The advanced version of the system, AttentiveLearner 2, uses both back and front cameras to monitor PPG signals and track learners' facial expressions, respectively, to infer learners' cognitive-affective states in real time (Pham & Wang, 2018b). The two channels are combined to achieve more robust emotion detection.…”
Section: Online Video Lectures and State-of-the-Art LA: Challenges an… (mentioning)
confidence: 99%
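One common way to combine two such channels is late fusion of per-class probabilities. The sketch below illustrates that general technique under stated assumptions; the label set and weights are made up and are not AttentiveLearner 2's actual method.

```python
# Illustrative late fusion of PPG-based and facial-expression-based
# emotion classifiers via a weighted average of probability vectors.
import numpy as np

CLASSES = ["bored", "neutral", "engaged", "confused"]  # assumed label set

def fuse_predictions(p_ppg, p_face, w_ppg=0.4, w_face=0.6):
    """Weighted average of the two channels' class probabilities."""
    fused = w_ppg * np.asarray(p_ppg) + w_face * np.asarray(p_face)
    return CLASSES[int(np.argmax(fused))], fused

# Example: the facial channel is confident, the PPG channel is ambiguous;
# fusion follows the more informative channel.
label, probs = fuse_predictions([0.3, 0.3, 0.2, 0.2], [0.05, 0.1, 0.7, 0.15])
# label == "engaged"
```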
“…Giving machines the capability to understand human language effectively opens new horizons for human-machine conversation systems [1], tutoring systems [2], and health care [3], to name a few applications.…”
Section: Introduction (mentioning)
confidence: 99%