Proceedings of the Seventh International Learning Analytics & Knowledge Conference 2017
DOI: 10.1145/3027385.3027387

Predicting the decrease of engagement indicators in a MOOC

Abstract: Predicting the decrease of students' engagement in typical MOOC tasks, such as watching lecture videos or submitting assignments, is key to triggering timely interventions that can avoid disengagement before it takes place. This paper proposes an approach to build the necessary predictive models using student data that becomes available during a course. The approach was employed in an experimental study to predict the decrease of three different engagement indicators in a MOOC. The results suggest i…

Cited by 55 publications (43 citation statements)
References 14 publications

“…The level of sophistication of the activity-based features in the works surveyed varies substantially, ranging from simple counting-based features (e.g. Kloft et al., 2014; Xing et al., 2016) to more complex features, including temporal indicators of increase/decrease (Chen and Zhang, 2017; Bote-Lorenzo and Gómez-Sánchez, 2017), sequences (Balakrishnan and Coetzee, 2013; Fei and Yeung, 2015), and latent variable models (Sinha et al., 2014a; Ramesh et al., 2013, 2014; Qiu et al., 2016). Despite this variation, each of these typically uses the same underlying data source (clickstream, or a relational database consisting of extracted time-stamped clickstream events) and draws from a relatively small and consistent set of base features, including: page viewing, or visiting various course pages, such as video lecture viewing pages, assignment pages, or course progress pages;…”
Section: Activity-based Models (mentioning)
confidence: 99%
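
As a concrete illustration of the counting-based features this excerpt refers to, the sketch below aggregates time-stamped clickstream events into per-student, per-week counts by event type. It is only a minimal example, not code from any of the surveyed works; the column names (student_id, timestamp, event_type) and the weekly bucketing are assumptions.

```python
# Minimal sketch of counting-based activity features from a clickstream log.
# Column names and the week indexing are assumptions, not the surveyed works' schema.
import pandas as pd

def weekly_activity_counts(events: pd.DataFrame, course_start: str) -> pd.DataFrame:
    """Count events per student and course week, split by event type
    (e.g. video lecture pages, assignment pages, progress pages)."""
    events = events.copy()
    events["timestamp"] = pd.to_datetime(events["timestamp"])
    # Course week index: 0 for the first week after course_start, 1 for the next, ...
    events["week"] = (events["timestamp"] - pd.Timestamp(course_start)).dt.days // 7
    counts = (
        events.groupby(["student_id", "week", "event_type"])
        .size()
        .unstack("event_type", fill_value=0)  # one column per event type
        .reset_index()
    )
    return counts

# Toy event log for illustration:
log = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 2],
    "timestamp": ["2017-01-02", "2017-01-03", "2017-01-02", "2017-01-09", "2017-01-10"],
    "event_type": ["video_view", "assignment_view", "video_view", "video_view", "progress_view"],
})
print(weekly_activity_counts(log, course_start="2017-01-02"))
```
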
“…These samples represent the hypothetical differences in performance on a future unseen dataset [9]; generating N = 50,000 samples on a typical laptop computer takes only a few seconds, and conducting this comparison for all 4560 pairwise comparisons in the experiment below takes less than 10 minutes using the BayesianTestsML Python library. The MCMC samples are used to estimate θ by simply counting the proportion of samples for which θ_i has the highest posterior probability. The results of this sampling can be visualized by projecting the (θ_{X>Y}, θ_{ROPE}, θ_{X<Y}) triplets onto barycentric coordinates to produce a posterior plot, shown in Figure 3.…”
Section: Bayesian Model Evaluation (mentioning)
confidence: 99%
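
The counting step described in this excerpt can be illustrated with a short NumPy sketch. This is not the BayesianTestsML API; it only shows how, given MCMC samples of the performance difference between two models X and Y, the three posterior probabilities (X better, practically equivalent within the ROPE, Y better) are estimated by counting the proportion of samples in each region. The ROPE half-width and the synthetic samples are assumptions for illustration.

```python
# Sketch of estimating (theta_{X>Y}, theta_ROPE, theta_{X<Y}) from posterior samples.
# Not the BayesianTestsML API; rope width and synthetic samples are assumptions.
import numpy as np

def posterior_probabilities(diff_samples: np.ndarray, rope: float = 0.01):
    """diff_samples: MCMC samples of score(X) - score(Y).
    rope: half-width of the region of practical equivalence around zero."""
    p_x_better = np.mean(diff_samples > rope)          # X practically better
    p_rope     = np.mean(np.abs(diff_samples) <= rope)  # practically equivalent
    p_y_better = np.mean(diff_samples < -rope)          # Y practically better
    return p_x_better, p_rope, p_y_better

# Synthetic samples standing in for the N = 50,000 MCMC draws:
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.02, scale=0.03, size=50_000)
print(posterior_probabilities(samples, rope=0.01))
```

The resulting triplet is what would be projected onto barycentric coordinates to produce the posterior plot the excerpt mentions.
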
“…average grade) as well as more complex features (e.g. number of submissions relative to the highest number of submissions by any student that week) [3,30,43]. Where courses used no assignments, models using this method defaulted to majority-class prediction.…”
Section: Feature Extraction/Data Source (mentioning)
confidence: 99%
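
A minimal pandas sketch of the two example features mentioned in this excerpt: a student's average grade, and the number of submissions made in a week relative to the highest number of submissions by any student that same week. The column names (student_id, week, grade) are assumptions, not the cited works' actual schema.

```python
# Sketch of the grade- and submission-based features described above.
# Column names are assumptions for illustration.
import pandas as pd

def assignment_features(subs: pd.DataFrame) -> pd.DataFrame:
    """subs: one row per submission, with columns student_id, week, grade."""
    weekly = (
        subs.groupby(["student_id", "week"])
        .agg(n_submissions=("grade", "size"), avg_grade=("grade", "mean"))
        .reset_index()
    )
    # Normalize each student's weekly count by the busiest student in that week.
    weekly["rel_submissions"] = (
        weekly["n_submissions"] / weekly.groupby("week")["n_submissions"].transform("max")
    )
    return weekly

subs = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 2, 3],
    "week":       [1, 1, 1, 1, 2, 2],
    "grade":      [0.8, 0.9, 0.6, 0.7, 1.0, 0.5],
})
print(assignment_features(subs))
```
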
“…To date, MOOC research has addressed a diverse array of research questions, from psychometrics and social psychology to predictive modeling and machine learning. For example, several works have explored prediction of various student outcomes using behavioral, linguistic, and assignment data from MOOCs, including course completion [1], [2], [3], assignment grades [4], Correct on First Attempt (CFA) submissions [5], student confusion [6], and changes in behavior over time [7]. A key area of research has been methods for feature engineering, or extracting structured information from raw data (i.e.…”
Section: A. Educational Big Data in the MOOC Era (mentioning)
confidence: 99%