Proceedings of the 2014 ACM Workshop on Multimodal Learning Analytics Workshop and Grand Challenge
DOI: 10.1145/2666633.2666639
Estimation of Presentations Skills Based on Slides and Audio Features

Abstract: This paper proposes a simple estimation of the quality of student oral presentations. It is based on the study and analysis of features extracted from the audio and digital slides of 448 presentations. The main goal of this work is to automatically predict the values assigned by professors to different criteria in a presentation evaluation rubric. Machine Learning methods were used to create several models that classify students in two clusters: high and low performers. The models created from slide features w…

Cited by 29 publications (25 citation statements)
References 25 publications
“…Luzardo et al performed two-class (good or poor) classification experiments to predict quality of slides (SQ), SCE, RIGP, and AVV. For predicting SCE, AVV, and RIGP, Luzardo et al used the audio features (minimum, maximum, average and standard deviation of pitch calculated for each student presentation/video) for the two-class (good or poor) classification task, which resulted in an accuracy of 63% (SCE), 69% (AVV), and 67% (RIGP) (Luzardo et al, 2014). Chen et al proposed a different approach.…”
Section: Related Work
confidence: 99%
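The statement above describes summarizing each presentation's pitch track into four statistics (minimum, maximum, average, standard deviation) used as audio features for a two-class classifier. A minimal sketch of that feature extraction, assuming pitch values were already obtained from an external pitch tracker and using a hypothetical toy track:

```python
import numpy as np

def pitch_stats(pitch_hz):
    """Summarize a presentation's pitch track (Hz) into the four
    statistics used as audio features: min, max, mean, std."""
    p = np.asarray(pitch_hz, dtype=float)
    p = p[p > 0]  # drop unvoiced frames, commonly encoded as 0 Hz
    return {
        "min": float(p.min()),
        "max": float(p.max()),
        "mean": float(p.mean()),
        "std": float(p.std()),
    }

# Hypothetical pitch track: voiced frames interleaved with unvoiced (0 Hz) gaps.
feats = pitch_stats([0, 110, 120, 0, 130, 140, 0])
```

The resulting four-number vector per presentation would then feed any off-the-shelf binary classifier for the good/poor labels.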
“…The aforementioned studies (Chen et al, 2014; Luzardo et al, 2014; Ochoa et al, 2014; Haider et al, 2016a) analyzed statistics (mean, median values, etc.) of acoustic features over a presentation and showed that acoustic features can predict some of the presentation delivery skills.…”
Section: Related Work
confidence: 99%
“…This is the case of [14], who matched the extracted features of slide presentations with human evaluation, using a classifier. The classifier used features that can be automatically extracted such as: font size, number of words, images, charts and the image entropy of each slide for measuring the contrast.…”
Section: Related Work
confidence: 99%
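One of the slide features named above, image entropy as a contrast measure, can be sketched from the grayscale histogram. A minimal illustration (the two toy images are hypothetical, not from the paper):

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image's histogram,
    a rough proxy for slide contrast: a flat image scores 0."""
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]  # ignore empty histogram bins
    return float(-(p * np.log2(p)).sum())

flat = np.full((10, 10), 128)                     # one gray level
checker = np.indices((10, 10)).sum(0) % 2 * 255   # two equally likely levels
```

Under this definition the single-level image has entropy 0 and the two-level checkerboard has entropy 1 bit; real slides fall somewhere in between.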
“…A summary of these studies can be found in Ochoa, Worsley, Chiluiza, and Luz (2014). In particular, Luzardo, Guamán, Chiluiza, Castells, and Ochoa (2014) investigated using audio cues - i.e., the speaking rate, as estimated from audio files, and some basic prosodic analysis - to judge delivery performance. Echeverría, Avendaño, Chiluiza, Vásquez, and Ochoa (2014) investigated using Kinect motion traces for measuring body language performance during presentations.…”
confidence: 99%
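The speaking-rate cue mentioned above is, at its simplest, words uttered per unit time. The paper estimates it from the audio signal itself; as a crude stand-in, assuming a transcript is available (both the function and the sample sentence below are hypothetical), the rate can be computed as:

```python
def speaking_rate_wpm(transcript, duration_s):
    """Words per minute from a transcript and the audio duration in
    seconds. A crude proxy for audio-based speaking-rate estimation."""
    n_words = len(transcript.split())
    return 60.0 * n_words / duration_s

# Seven words spoken in three seconds.
rate = speaking_rate_wpm("hello everyone today I present our results", 3.0)
```

Audio-only estimators instead count syllable nuclei from intensity peaks, avoiding the need for a transcript.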