2018
DOI: 10.1007/s11257-018-9203-z

Student success prediction in MOOCs

Abstract: Predictive models of student success in Massive Open Online Courses (MOOCs) are a critical component of effective content personalization and adaptive interventions. In this article we review the state of the art in predictive models of student success in MOOCs and present a categorization of MOOC research according to the predictors (features), prediction (outcomes), and underlying theoretical model. We critically survey work across each category, providing data on the raw data source, feature engineering, st…


Cited by 149 publications (119 citation statements)
References 103 publications
“…transfer with the baselines (Label-Truth, Label-Truth-AE, Naive Transfer, In-Situ Learning, and Instance-Based Transfer), for the similar pairs of source and target 6.00.1x→6.00.1x (within offerings of one course) are shown in Table 4 and Figure 8a, and the dissimilar pairs 6.00.2x → 6.00.1x (across two courses) in Table 5 and Figure 8b. Note that the performance of In-Situ Learning and 8. It shows that Naive Transfer overfits to the source domain from week 5.…”
Section: Transfer Learning Results (mentioning)
confidence: 94%
“…Feature identification is a critical precursor to prediction [8]. Some human-selected and engineered features are page views, video interactions, forum posts, and content interactions.…”
Section: Related Work (mentioning)
confidence: 99%
“…A key area of research has been methods for feature engineering, or extracting structured information from raw data (e.g., clickstream server logs, natural language in discussion posts) [8].…”
Section: A. Educational Big Data in the MOOC Era (mentioning)
confidence: 99%
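The two excerpts above both point at feature engineering, i.e. aggregating raw clickstream events into per-student features such as page views, video interactions, and forum posts. A minimal sketch of that aggregation step, using a hypothetical, greatly simplified event format (real MOOC server logs are far messier):

```python
from collections import Counter

# Hypothetical clickstream: (user_id, event_type) pairs. This format is an
# assumption for illustration, not the structure of any real MOOC log.
events = [
    ("u1", "page_view"), ("u1", "video_play"), ("u1", "forum_post"),
    ("u2", "page_view"), ("u2", "page_view"), ("u1", "video_play"),
]

def engineer_features(events, event_types=("page_view", "video_play", "forum_post")):
    """Aggregate raw events into a per-user vector of event counts."""
    counts = Counter(events)  # keyed by (user_id, event_type)
    users = sorted({u for u, _ in events})
    return {u: [counts[(u, t)] for t in event_types] for u in users}

print(engineer_features(events))  # {'u1': [1, 2, 1], 'u2': [2, 0, 0]}
```

Count vectors like these are then the input rows for the predictive models discussed below.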
“…In the survey presented above and in other work [21], we have described the common practices of predictive modeling experiments in learning analytics. These include (a) a massive space of potential models due to many data sources, feature types, and algorithms used; (b) relatively small collections of datasets, for example, even the largest prior MOOC studies of which we are aware evaluate around 40 MOOCs (i.e., [47,17]) and (c) large individual datasets, which make repeated model-fitting undesirable, if not intractable.…”
Section: The Case for Bayesian Model Evaluation (mentioning)
confidence: 99%
“…We consider the following models in our experiment: (1) classical decision trees (CART) [7]; (2) L2 (or "ridge") regularized logistic regression (L2LR); (3) gradient boosted trees (Adaboost) [12], used as a stand-in for the widely used [21] random forest method; (4) support vector machine (SVM) with linear kernel; (5) naïve Bayes (NB). These represent five of the most commonly used modeling algorithms in predictive models of student success in MOOCs [21]. A summary of the models considered, and any special preprocessing, is shown in Table 4.…”
Section: Algorithms and Hyperparameters (mentioning)
confidence: 99%
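The five algorithm families in the excerpt map onto standard scikit-learn estimators. The sketch below is illustrative only, not the cited paper's configuration; every hyperparameter shown, and the toy dataset, are assumptions:

```python
# Illustrative scikit-learn counterparts of the five algorithms named in the
# excerpt. Hyperparameters are assumptions, not the paper's settings.
from sklearn.tree import DecisionTreeClassifier      # (1) CART
from sklearn.linear_model import LogisticRegression  # (2) L2LR
from sklearn.ensemble import AdaBoostClassifier      # (3) Adaboost
from sklearn.svm import LinearSVC                    # (4) linear-kernel SVM
from sklearn.naive_bayes import GaussianNB           # (5) naive Bayes

models = {
    "CART": DecisionTreeClassifier(max_depth=5),
    "L2LR": LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
    "Adaboost": AdaBoostClassifier(n_estimators=50),
    "SVM": LinearSVC(C=1.0),
    "NB": GaussianNB(),
}

# Tiny made-up dataset: per-student activity counts and a binary
# success label, for demonstration only.
X = [[3, 1, 1, 2], [0, 0, 0, 1], [5, 2, 1, 3], [1, 0, 0, 0]]
y = [1, 0, 1, 0]

for name, model in models.items():
    model.fit(X, y)
    print(name, model.predict([[4, 2, 1, 2]])[0])
```

All five share the same fit/predict interface, which is what makes the kind of broad algorithm comparison described in the excerpt straightforward to run over many feature sets.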