2021
DOI: 10.1007/s12528-021-09298-8

Improving prediction of students’ performance in intelligent tutoring systems using attribute selection and ensembles of different multimodal data sources

Abstract: The aim of this study was to predict university students’ learning performance using different sources of performance and multimodal data from an Intelligent Tutoring System. We collected and preprocessed data from 40 students from different multimodal sources: learning strategies from system logs, emotions from videos of facial expressions, allocation and fixations of attention from eye tracking, and performance on posttests of domain knowledge. Our objective was to test whether the prediction could be improved by using attribute selection and classification ensembles of the students’ processes.

Cited by 14 publications (20 citation statements)
References 39 publications (45 reference statements)
“…In a subsequent study, Cerezo’s team (Chango et al., 2021) collected and preprocessed data from 40 students using different multimodal sources: learning strategies from log files, emotions from videos of facial expressions, allocation and fixations of attention from eye tracking, and performance on posttests of domain knowledge. They used multimodal data to test whether the prediction could be improved by using attribute selection and classification ensembles of the students’ processes.…”
Section: Current Extensions of MetaTutor—MetaTutorES
confidence: 99%
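The pipeline the quoted passage describes — attribute selection inside each multimodal source followed by an ensemble across sources — can be sketched as follows. This is a minimal, library-free illustration with toy data; the random feature values, the difference-of-means selection score, and the nearest-centroid classifiers are hypothetical stand-ins, not the study's actual methods:

```python
# Toy sketch: select the most informative attributes within each multimodal
# source, train one simple classifier per source, and fuse predictions by
# majority vote. All data and methods here are illustrative stand-ins.
import random

def score_feature(col, labels):
    """Crude filter score: absolute difference of the two class means."""
    pos = [v for v, y in zip(col, labels) if y == 1]
    neg = [v for v, y in zip(col, labels) if y == 0]
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg))

def select_k(X, y, k):
    """Keep the k highest-scoring columns; return reduced rows and indices."""
    d = len(X[0])
    scores = [score_feature([row[j] for row in X], y) for j in range(d)]
    keep = sorted(range(d), key=lambda j: -scores[j])[:k]
    return [[row[j] for j in keep] for row in X], keep

def nearest_centroid(Xtr, ytr):
    """Train a nearest-centroid classifier; return a predict(x) function."""
    def centroid(label):
        rows = [x for x, y in zip(Xtr, ytr) if y == label]
        return [sum(r[j] for r in rows) / len(rows) for j in range(len(rows[0]))]
    c0, c1 = centroid(0), centroid(1)
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return lambda x: 0 if dist(x, c0) <= dist(x, c1) else 1

random.seed(1)
y = [i % 2 for i in range(20)]  # toy pass/fail labels
# Three toy "modalities": system logs, facial-expression emotions, eye tracking.
sources = {name: [[random.random() + 0.5 * y[i] for _ in range(5)]
                  for i in range(20)]
           for name in ("logs", "emotions", "gaze")}

models = {}
for name, X in sources.items():
    Xk, keep = select_k(X, y, k=2)            # attribute selection per source
    models[name] = (nearest_centroid(Xk, y), keep)

def ensemble_predict(i):
    """Majority vote over the per-source classifiers for student i."""
    votes = [predict([sources[name][i][j] for j in keep])
             for name, (predict, keep) in models.items()]
    return 1 if sum(votes) >= 2 else 0

acc = sum(ensemble_predict(i) == y[i] for i in range(20)) / 20
print("training accuracy:", acc)
```

In practice one would evaluate on held-out data with stronger learners; the point is only the shape of the pipeline: selection inside each source, fusion at the decision level.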
“…We have also used unsupervised machine learning techniques (Lallé et al., 2018, 2021; Wortha et al., 2019; Wiedbusch and Azevedo, 2020) to examine complex eye-tracking data and facial expressions of emotions during learning with MetaTutor. We continue to use non-traditional statistical techniques, including dynamical systems modeling (Dever et al., in press), to examine learners’ emergent SRL behaviors, and MLAs to predict performance at the end of the learning session (Mu et al., 2020; Saint et al., 2020; Chango et al., 2021; Fan et al., 2021). Despite our ability to continuously adapt and use contemporary analytical techniques that emerge from the computational, engineering, psychological, statistical, and data sciences, we as a field are still faced with a major barrier that continues to impact the educational effectiveness of intelligent systems such as MetaTutor.…”
Section: Contributions to the Field of Self-Regulated Learning and In...
confidence: 99%
“…In hybrid or semi‐in‐person education, the work by Chango, Cerezo, and Romero (2021) and Chango, Cerezo, Sanchez‐Santillan, et al. (2021) stands out for the fusion of different types of class recordings with data obtained through Moodle, while Xu et al. (2019) fused video and text of the teacher both explaining various ideas in class and answering students’ questions. The study by J. Chen et al. (2014) stood out by including probably the greatest number and widest variety of data sources to fuse, including posture, gaze, electrodermal activity, and student evaluation data.…”
Section: Multimodal Data
confidence: 99%
“…Five of the studies which appeared in the early fusion category stand out for going beyond simple concatenation of features with rather more detailed procedures. Four of those studies were configured to select the best features of each data source (Chango, Cerezo, & Romero, 2021; Chango, Cerezo, Sanchez‐Santillan, et al., 2021; N. L. Henderson, Rowe, Mott, Brawner, et al., 2019; N. Henderson et al., 2020). In contrast, N. L. Henderson, Rowe, Mott, and Lester (2019) reduced the dimensionality of the features using principal component analysis (PCA) in two different configurations: (a) they concatenated all of the features of the sources and applied PCA to the resulting vector; (b) they applied PCA to the features of each source first and concatenated the results following the reduction of dimensionality.…”
Section: Data Fusion Techniques in Multimodal LA/EDM
confidence: 99%
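The two early-fusion configurations described in (a) and (b) can be sketched as follows. This is a minimal, dependency-free illustration with toy two-source data; the single-component power-iteration PCA and the random feature values are assumptions of the sketch, not the cited studies' implementations:

```python
# Toy sketch of the two early-fusion PCA configurations:
# (a) concatenate all features, then one PCA on the joint vector;
# (b) PCA per source first, then concatenate the reduced vectors.
import random

def center(X):
    """Subtract the column means from each row."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    return [[row[j] - means[j] for j in range(d)] for row in X]

def pca_1d(X, iters=100):
    """Project rows of X onto their first principal component (power iteration)."""
    Xc = center(X)
    d = len(Xc[0])
    v = [1.0] * d
    for _ in range(iters):
        # w = C v, with C = X^T X (unnormalised covariance)
        scores = [sum(x[j] * v[j] for j in range(d)) for x in Xc]
        w = [sum(scores[i] * Xc[i][j] for i in range(len(Xc))) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5 or 1.0
        v = [c / norm for c in w]
    return [[sum(x[j] * v[j] for j in range(d))] for x in Xc]

def concat(A, B):
    """Row-wise concatenation of two feature matrices."""
    return [a + b for a, b in zip(A, B)]

# Two toy "sources" (e.g., log features and gaze features) for 10 students.
random.seed(0)
logs = [[random.random() for _ in range(4)] for _ in range(10)]
gaze = [[random.random() for _ in range(3)] for _ in range(10)]

# (a) concatenate all features, then one PCA on the resulting vector
fused_a = pca_1d(concat(logs, gaze))

# (b) PCA per source first, then concatenate the reduced vectors
fused_b = concat(pca_1d(logs), pca_1d(gaze))

print(len(fused_a[0]), len(fused_b[0]))  # prints "1 2"
```

Configuration (a) lets PCA exploit correlations across sources in a single projection, while (b) guarantees that every source contributes at least one dimension to the fused representation.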