LAK21: 11th International Learning Analytics and Knowledge Conference 2021
DOI: 10.1145/3448139.3448188

SAINT+: Integrating Temporal Features for EdNet Correctness Prediction

Abstract: We propose SAINT+, a successor of SAINT, a Transformer-based knowledge tracing model that separately processes exercise information and student response information. Following the architecture of SAINT, SAINT+ has an encoder-decoder structure where the encoder applies self-attention layers to a stream of exercise embeddings, and the decoder alternately applies self-attention layers and encoder-decoder attention layers to streams of response embeddings and encoder output. Moreover, SAINT+ incorporates two temporal feature embeddings, elapsed time and lag time, into the response embeddings.
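The architecture described above maps onto a standard Transformer encoder-decoder. The following PyTorch sketch illustrates that flow under stated assumptions: the layer counts, dimensions, and the linear projections used for the two temporal features are illustrative, and positional embeddings and the one-position shift of the response stream at training time are omitted for brevity. It is a minimal sketch, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SAINTPlusSketch(nn.Module):
    """Minimal sketch of the SAINT+ encoder-decoder flow (illustrative sizes)."""

    def __init__(self, n_exercises=10000, d_model=128, n_heads=8, n_layers=2):
        super().__init__()
        self.exercise_emb = nn.Embedding(n_exercises, d_model)
        self.response_emb = nn.Embedding(2, d_model)           # incorrect / correct
        self.elapsed_proj = nn.Linear(1, d_model, bias=False)  # assumed continuous embedding
        self.lag_proj = nn.Linear(1, d_model, bias=False)      # assumed continuous embedding
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)
        # Each decoder layer applies self-attention, then encoder-decoder
        # attention, matching the alternation described in the abstract.
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), n_layers)
        self.out = nn.Linear(d_model, 1)

    def forward(self, exercises, responses, elapsed_s, lag_s):
        # exercises, responses: (batch, seq) integer ids
        # elapsed_s, lag_s: (batch, seq) times in seconds
        seq_len = exercises.size(1)
        # Upper-triangular mask so position k attends only to positions <= k.
        causal = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=exercises.device),
            diagonal=1)
        enc_in = self.exercise_emb(exercises)
        # SAINT+ adds the two temporal feature embeddings to the response stream.
        dec_in = (self.response_emb(responses)
                  + self.elapsed_proj(elapsed_s.unsqueeze(-1))
                  + self.lag_proj(lag_s.unsqueeze(-1)))
        memory = self.encoder(enc_in, mask=causal)
        hidden = self.decoder(dec_in, memory, tgt_mask=causal, memory_mask=causal)
        return torch.sigmoid(self.out(hidden)).squeeze(-1)  # P(correct) per position
```

A forward pass over a batch of interaction sequences yields a per-position probability of a correct response, which can be trained with binary cross-entropy against the observed correctness labels.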

Cited by 87 publications (40 citation statements); references 13 publications.
“…Multiple more recent approaches appear promising (Nakagawa et al., 2019; Ghosh et al., 2020; Cheng et al., 2020; Pandey and Srivastava, 2020; Oya and Morishima, 2021; Shin et al., 2021; Song et al., 2021), and many claim significant performance improvements, but their results still require verification via replication studies. Also, many of these newer models include slightly different inputs, such as skill and previous correctness as separate inputs, additional time-related inputs (Shin et al., 2021), or leveraging both exercise and skill labels (Song et al., 2021), giving rise to the question of whether and how much the older architectures would also benefit from such input additions. We agree with this direction of exploring new inputs, since DLKT models are powerful models that are likely to benefit from such additional information.…”
Section: Evaluation Results - No Silver Bullet (mentioning)
confidence: 99%
“…These include, e.g., DKVMN (Zhang et al., 2017), SKVMN (Abdelrahman and Wang, 2019), DQN (Lee and Yeung, 2019), GNNKT (Nakagawa et al., 2019), SAKT (Pandey and Karypis, 2019), SAINT, SAINT+ (Shin et al., 2021), and JKT (Song et al., 2021), many of which include additional inputs and adjust the layer structure by introducing techniques from the broader machine learning domain.…”
Section: Deep Learning Models For Knowledge Tracing (mentioning)
confidence: 99%
“…• SAINT+ [37], the first Transformer-based knowledge tracing model, is unique in that it processes exercise information and student response information separately, while also embedding two temporal features, elapsed time and lag time, into the student response embeddings.…”
Section: Evaluation Methods (mentioning)
confidence: 99%
“…The elapsed time et strongly evidences a student's proficiency in knowledge and skills [51]. This time is converted to seconds and capped at 500 s. A d′-dimensional latent embedding vector for et_k is computed as…”
Section: Learner Knowledge State Evolution (mentioning)
confidence: 99%
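The quoted formula is truncated, so the following sketch shows one plausible realization of the step it describes, assuming input times in milliseconds and a learnable linear projection of the capped scalar; the cited paper's exact construction (e.g., a categorical lookup over discretized seconds) may differ.

```python
import torch
import torch.nn as nn

D_PRIME = 64                                        # assumed latent dimension d'
elapsed_embed = nn.Linear(1, D_PRIME, bias=False)   # learnable projection (assumption)

def embed_elapsed_time(et_ms: torch.Tensor) -> torch.Tensor:
    """et_ms: (batch, seq) elapsed times, assumed to be in milliseconds."""
    et_s = (et_ms / 1000.0).clamp(max=500.0)   # convert to seconds, cap at 500 s
    return elapsed_embed(et_s.unsqueeze(-1))   # -> (batch, seq, D_PRIME)
```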