2019
DOI: 10.1007/s00607-019-00722-7
Towards emotion-sensitive learning cognitive state analysis of big data in education: deep learning-based facial expression analysis using ordinal information

Cited by 26 publications (15 citation statements)
References 22 publications
“… Xu et al. (2020) proposed a multitask learning system using a cascaded CNN, with the objective of incorporating student attentiveness, emotion recognition, and intensity estimation into an intelligent classroom system. The first module of the cascaded network handled the preprocessing stages, which involve face detection, face alignment, and head pose estimation, through which attentiveness is determined. …”
Section: Related Work (mentioning)
confidence: 99%
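For context, the cascaded multitask pipeline described in that statement can be approximated as a two-stage network: a shared preprocessing stage that yields face features and head pose (used as an attentiveness cue), followed by a stage that predicts emotion and its intensity. The sketch below is a minimal PyTorch illustration, not the authors' implementation; the module names, layer sizes, and the seven-emotion output are assumptions.

```python
# Hedged sketch of a two-stage, multitask cascade (assumed layout, not the paper's code)
import torch
import torch.nn as nn

class PreprocessModule(nn.Module):
    """Stage 1: shared backbone over an aligned face crop,
    plus a head-pose head (yaw, pitch, roll) as an attentiveness cue."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pose_head = nn.Linear(64, 3)  # yaw, pitch, roll

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.pose_head(feats)

class EmotionModule(nn.Module):
    """Stage 2: emotion classification and intensity regression
    on the shared features produced by stage 1."""
    def __init__(self, num_emotions=7):
        super().__init__()
        self.emotion_head = nn.Linear(64, num_emotions)
        self.intensity_head = nn.Linear(64, 1)

    def forward(self, feats):
        return self.emotion_head(feats), self.intensity_head(feats)

if __name__ == "__main__":
    face = torch.randn(1, 3, 112, 112)      # one aligned face crop
    stage1, stage2 = PreprocessModule(), EmotionModule()
    feats, pose = stage1(face)
    emotion_logits, intensity = stage2(feats)
    print(pose.shape, emotion_logits.shape, intensity.shape)
```

In practice the two stages would be trained jointly with a weighted sum of the pose, emotion, and intensity losses, which is what makes the setup multitask rather than a simple pipeline.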
“… Big data analysis has opened up the opportunity to analyze the psychology of a learner. The authors in Reference 28 propose a novel emotion-sensitive method to determine learners' interest based on head position and facial emotions. The major challenge of online learning is the retention rate. …”
Section: Deep Learning For Smart Education (mentioning)
confidence: 99%
“… As unimodal works, the following can be cited: (van der Haar, 2019), where the face is captured through video; (Zatarain Cabada, Barron Estrada, Halor-Hernandez, & Reyes-García, 2014), where the face image is captured; and (Wei-Long & Bao-Liang, 2015). As multimodal works, examples include: (Xu, Chen, Han, Tan, & Xu, 2019), where head position is added to face capture; (Alepis & Virvou, 2006), where questionnaires, keystrokes, and a microphone are used; (Salmeron-Majadas, Santos, & Boticario, 2014), where mouse and keyboard movements are used; and (Calot, Ierache, & Hasperué, 2019), where keystroke capture is performed. When detecting affective states with multimodal approaches, several challenges arise: deciding which modalities to combine, collecting training data, handling missing data, handling different sampling rates and the interdependence of modalities when building models, deciding how to fuse data from different modalities, and deciding how to evaluate emotional states. …”
Section: Methods (unclassified)
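To make the fusion question from that statement concrete, one common option is simple late fusion: each modality is encoded separately and the embeddings are concatenated before a shared classifier. The sketch below is illustrative only; the two-modality setup (face features plus head pose), the dimensions, and the four affective states are assumptions, not taken from the cited works.

```python
# Illustrative late-fusion sketch (assumed setup, not from the cited works)
import torch
import torch.nn as nn

class LateFusionAffect(nn.Module):
    """Encode each modality separately, concatenate, then classify affective state."""
    def __init__(self, face_dim=128, pose_dim=3, num_states=4):
        super().__init__()
        self.face_net = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU())
        self.pose_net = nn.Sequential(nn.Linear(pose_dim, 16), nn.ReLU())
        self.classifier = nn.Linear(64 + 16, num_states)

    def forward(self, face_feat, pose_feat):
        fused = torch.cat([self.face_net(face_feat), self.pose_net(pose_feat)], dim=-1)
        return self.classifier(fused)

if __name__ == "__main__":
    model = LateFusionAffect()
    logits = model(torch.randn(2, 128), torch.randn(2, 3))
    print(logits.shape)  # (2, 4)
```

Early fusion (concatenating raw features before any encoding) or model-level fusion are alternative answers to the same design question; which one works best depends on the sampling rates and missing-data patterns the statement lists as challenges.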