2020
DOI: 10.1007/978-3-030-52240-7_46
Deep Knowledge Tracing with Transformers

Cited by 30 publications (16 citation statements)
References 4 publications
“…Selected approach is GKT [22]. • Attention-based models: dependence between interactions is captured by the attention mechanism and its variants [10,24,26,47]. Selected approaches are AKT [10], SAKT [23] and SAINT [7].…”
Section: Representative DLKT Methods (mentioning)
confidence: 99%
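The shared idea behind these attention-based models can be illustrated compactly. Below is a minimal PyTorch sketch in the spirit of SAKT: the exercise to be answered next forms the attention query, embedded past interactions (skill id combined with correctness) form the keys and values, and a causal mask keeps each step from seeing the future. Class and parameter names (`SAKTBlock`, `d_model`, `n_heads`) are illustrative assumptions, not taken from any of the cited implementations.

```python
import torch
import torch.nn as nn

class SAKTBlock(nn.Module):
    """Minimal sketch of attention over past interactions, SAKT-style."""

    def __init__(self, num_skills: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.num_skills = num_skills
        # One embedding per (skill, correctness) pair for past interactions.
        self.interaction_emb = nn.Embedding(2 * num_skills, d_model)
        # Query embedding for the exercise being predicted.
        self.exercise_emb = nn.Embedding(num_skills, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, 1)

    def forward(self, skills, correct):
        # skills, correct: (batch, seq_len) integer tensors.
        # Interactions up to step t are keys/values; the exercise at t+1 is the query.
        interactions = self.interaction_emb(
            skills[:, :-1] + self.num_skills * correct[:, :-1]
        )
        queries = self.exercise_emb(skills[:, 1:])
        # Causal mask: True entries are blocked, so step t sees only steps <= t.
        L = queries.size(1)
        mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
        ctx, _ = self.attn(queries, interactions, interactions, attn_mask=mask)
        return torch.sigmoid(self.out(ctx)).squeeze(-1)  # P(correct) per step

# Toy usage: 10 skills, batch of 2 sequences of length 6.
model = SAKTBlock(num_skills=10)
skills = torch.randint(0, 10, (2, 6))
correct = torch.randint(0, 2, (2, 6))
print(model(skills, correct).shape)  # torch.Size([2, 5])
```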
“…Similar to all existing DLKT research works [1,4,10,11,14,15,20,21,22,24,25,26,29,34,37,44,45,46,47], we use the area under the receiver operating characteristic curve (AUC) as the main metric to evaluate the performance of DLKT models on predicting binary-valued future learner responses to either questions or KCs. We also report detailed model performance in terms of accuracy in the appendix.…”
Section: Evaluation Metric (mentioning)
confidence: 99%
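As a concrete illustration of this metric setup, here is a minimal scikit-learn sketch computing AUC on binary responses, plus thresholded accuracy as the secondary metric; the arrays are toy data standing in for model outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# y_true: binary-valued learner responses; y_prob: predicted P(correct).
y_true = np.array([1, 0, 1, 1, 0, 1])
y_prob = np.array([0.9, 0.3, 0.6, 0.8, 0.4, 0.55])

auc = roc_auc_score(y_true, y_prob)          # threshold-free ranking metric
acc = accuracy_score(y_true, y_prob >= 0.5)  # accuracy at a 0.5 cutoff
print(f"AUC = {auc:.3f}, accuracy = {acc:.3f}")
```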
“…First, we acknowledge that there are a wide variety of knowledge tracing models, including newer ones, that were not included in the empirical evaluation. For example, Pu et al. (Pu et al., 2020) and Choi et al. have proposed Transformer-based DLKT models, of which the latter has been further extended by Shin et al. (Shin et al., 2021). Pandey and Srivastava (Pandey and Srivastava, 2020) have also proposed a Transformer DLKT model (RKT) that additionally takes context-related information, such as the textual content of the questions, as input. Liu et al. proposed an exercise-enhanced bidirectional LSTM with attention, called EKT, which leverages both exercise and concept (skill) information. Ghosh et al. (Ghosh et al., 2020) developed an attention-based DLKT model that fuses DLKT with IRT. Nakagawa et al. (Nakagawa et al., 2019) proposed a graph-based DLKT model, and Song et al. (Song et al., 2021) proposed another graph-based DLKT model which, like EKT, models both exercise and skill relations. Yudelson has examined Elo-rating-based models for estimating student knowledge (Yudelson, 2019), and Ghosh et al. have developed option tracing, where the exact choice a student makes is predicted instead of just answer correctness (Ghosh et al., 2021).…”
Section: Limitations of Work - This Work Is Not Perfect Either (mentioning)
confidence: 99%
“…Knowledge tracing (KT) is a student modeling task where knowledge or skill is estimated from a trace of interactions with learning activities, such as course exercises. While new knowledge tracing models and modifications of existing ones are presented regularly (Yudelson et al., 2013; Piech et al., 2015; Zhang et al., 2017; Yeung and Yeung, 2018; Abdelrahman and Wang, 2019; Nakagawa et al., 2019; Pu et al., 2020), it is not always clear which factors actually contribute to the performance of the proposed models (Wilson et al., 2016; Lipton and Steinhardt, 2018). To what extent do the results depend on the proposed model algorithms and the data used, or are the optimizations the actual key to the results?…”
Section: Introduction (mentioning)
confidence: 99%
“…The dynamic key-value memory network (DKVMN) uses a static memory called the key and a dynamic memory called the value to discover latent relations between exercises and knowledge concepts [10,11]. Self-attentive knowledge tracing (SAKT) proposes a self-attention-based KT model of the student's knowledge state, with exercises as attention queries and the student's past interactions as attention keys/values [3,12-15].…”
Section: Introduction (mentioning)
confidence: 99%
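The key-value read/write that DKVMN performs can be sketched in a few lines. The following PyTorch cell is an illustrative simplification under assumed names (`DKVMNCell`, `n_slots`), not the published DKVMN implementation: the static key memory yields correlation weights via a softmax over slot similarities, a read returns the weighted value memory, and a write applies the erase/add update described in the DKVMN papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DKVMNCell(nn.Module):
    """Minimal sketch of one DKVMN-style read/write step."""

    def __init__(self, num_skills: int, n_slots: int = 20, d_k: int = 32, d_v: int = 32):
        super().__init__()
        self.num_skills = num_skills
        self.key_memory = nn.Parameter(torch.randn(n_slots, d_k))  # static key memory
        self.skill_emb = nn.Embedding(num_skills, d_k)
        self.interaction_emb = nn.Embedding(2 * num_skills, d_v)
        self.erase = nn.Linear(d_v, d_v)
        self.add = nn.Linear(d_v, d_v)

    def read(self, value_memory, skill):
        # Correlation weights over memory slots via softmax of key similarity.
        w = F.softmax(self.skill_emb(skill) @ self.key_memory.t(), dim=-1)
        # Weighted read from the dynamic value memory: (batch, d_v).
        return w, torch.bmm(w.unsqueeze(1), value_memory).squeeze(1)

    def write(self, value_memory, w, skill, correct):
        v = self.interaction_emb(skill + self.num_skills * correct)
        e = torch.sigmoid(self.erase(v)).unsqueeze(1)  # erase gate, (batch, 1, d_v)
        a = torch.tanh(self.add(v)).unsqueeze(1)       # add vector, (batch, 1, d_v)
        w = w.unsqueeze(-1)                            # (batch, n_slots, 1)
        return value_memory * (1 - w * e) + w * a

# Toy usage: batch of 3 students, 10 skills, fresh value memory.
cell = DKVMNCell(num_skills=10)
mem = torch.zeros(3, 20, 32)
skill = torch.tensor([1, 4, 7])
correct = torch.tensor([1, 0, 1])
w, r = cell.read(mem, skill)
mem = cell.write(mem, w, skill, correct)
print(r.shape, mem.shape)  # torch.Size([3, 32]) torch.Size([3, 20, 32])
```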