2021
DOI: 10.1016/j.ins.2020.08.017

Fine-grained learning performance prediction via adaptive sparse self-attention networks

Cited by 26 publications (10 citation statements)
References 22 publications
“…Fifty-four (87.70%) studies employed single intelligent models for predicting the attainment of learning outcomes. Remarkably, only eight studies (i.e., [60,65,66,80,84,93,96,101]) explored the use of hybrid intelligent models to improve the accuracy of academic performance predictions. Hybrid or ensemble classifiers involve the integration of heterogeneous learning techniques to boost predictive performance [106].…”
Section: Predictive Models of Learning Outcomes (mentioning)
confidence: 99%
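The "hybrid or ensemble" idea in this statement can be made concrete with a minimal sketch. This is not any cited study's model: the data are synthetic, and the choice of heterogeneous base learners (logistic regression, random forest, SVM) combined via scikit-learn's VotingClassifier is an illustrative assumption.

```python
# Minimal sketch of a heterogeneous (hybrid) ensemble for outcome
# prediction. Synthetic data stands in for student features; the base
# learners and their settings are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Soft voting averages each model's predicted class probabilities, so
# structurally different learners can compensate for one another.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(f"Held-out accuracy: {ensemble.score(X_test, y_test):.3f}")
```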
“…However, 18 studies were inconclusive about the strength of their findings. Only 5 (8.06%) studies reported using multiple datasets to verify the performance of their predictive models and to check the consistency and validity of the learning outcome predictions [65,80,86,96,102]. The remaining studies (91.93%) used only one dataset.…”
Section: Dominant Factors Predicting Student Learning Outcomes (mentioning)
confidence: 99%
“…Pandey and Karypis [28] proposed a self-attentive model for knowledge tracing (SAKT), which directly applied the transformer, with few modifications, to capture long-term dependencies among students' learning interactions, and achieved fairly good performance. Moreover, Wang et al. [54] proposed an adaptive sparse self-attention network that generates the missing features and simultaneously obtains fine-grained predictions of student performance. Zhu et al. [55] identified a vibration problem in DKT and presented an attention-based knowledge tracing model to solve it, further using a Finite State Automaton (FSA) to provide a deep analysis of knowledge state transitions.…”
Section: Attentive Knowledge Tracing (mentioning)
confidence: 99%
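The attention mechanism these models share can be sketched briefly. The following is neither SAKT nor the adaptive sparse network itself; it is a minimal single-layer PyTorch illustration of attending over a student's past (skill, correctness) interactions to predict the next response, with all dimensions, vocabulary sizes, and the embedding scheme assumed for the example.

```python
# Minimal sketch of self-attention over a student's interaction history,
# in the spirit of attentive knowledge tracing. The single-layer design
# and all sizes are illustrative assumptions, not a published architecture.
import torch
import torch.nn as nn

class TinyAttentiveKT(nn.Module):
    def __init__(self, n_skills: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Each past interaction is a (skill, correctness) pair, encoded
        # as one of 2 * n_skills interaction ids.
        self.interaction_emb = nn.Embedding(2 * n_skills, d_model)
        # The query is the skill of the exercise attempted next.
        self.skill_emb = nn.Embedding(n_skills, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, 1)

    def forward(self, past_interactions, next_skills):
        # past_interactions: (batch, seq_len) interaction ids
        # next_skills:       (batch, seq_len) skill ids of next exercises
        keys = self.interaction_emb(past_interactions)
        queries = self.skill_emb(next_skills)
        # Causal mask: position t may only attend to interactions <= t.
        seq_len = past_interactions.size(1)
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), 1)
        context, _ = self.attn(queries, keys, keys, attn_mask=mask)
        # Probability that the student answers the next exercise correctly.
        return torch.sigmoid(self.out(context)).squeeze(-1)

model = TinyAttentiveKT(n_skills=100)
hist = torch.randint(0, 200, (8, 20))   # 8 students, 20 past interactions
nxt = torch.randint(0, 100, (8, 20))    # skills of the next exercises
print(model(hist, nxt).shape)           # torch.Size([8, 20])
```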
“…The discriminator is a binary classifier used to estimate the probability that a sample comes from the training data. GAN can, in principle, generate unlimited new samples from the learned distribution, and it has great application value in artificial-intelligence fields such as image processing, visual computing, and speech processing [22,23]. GAN provides a new direction for unsupervised learning and offers methods and ideas for handling high-dimensional data and complex probability distributions.…”
Section: Related Work (mentioning)
confidence: 99%
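The adversarial setup this passage describes can be shown in a short sketch: the discriminator is trained as a binary classifier on real-versus-generated samples, while the generator learns to fool it. Network sizes, the toy Gaussian "training data", and all optimizer settings are illustrative assumptions.

```python
# Minimal GAN sketch: the discriminator is a binary classifier estimating
# the probability that a sample comes from the real (training) data rather
# than from the generator. Sizes and toy data are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim)
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)
bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(200):
    real = torch.randn(64, data_dim) + 3.0          # toy "training data"
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    real_loss = bce(discriminator(real), torch.ones(64, 1))
    fake_loss = bce(discriminator(fake.detach()), torch.zeros(64, 1))
    (real_loss + fake_loss).backward()
    opt_d.step()

    # Generator step: push the discriminator to call fakes "real".
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```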