“…Pardos and Heffernan [17] introduced the KT-IDEM model which extends the basic BKT model to account for item difficulty. The model fits separate "guess" and "slip" parameters for each item in a skill, and the question node is conditioned on the item node in the network topology.…”
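The KT-IDEM structure described in this snippet, a separate guess/slip pair per item rather than per skill, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the item names and parameter values are invented, and only the per-item emission (prediction) step is shown.

```python
# Hypothetical per-item parameters: KT-IDEM fits a separate guess/slip
# pair for every item within a skill (these values are made up).
ITEM_PARAMS = {
    "easy_item": {"guess": 0.30, "slip": 0.05},
    "hard_item": {"guess": 0.10, "slip": 0.25},
}

def p_correct(p_know, item):
    """Predicted probability of a correct response under KT-IDEM:
    the emission parameters depend on which item is presented."""
    g = ITEM_PARAMS[item]["guess"]
    s = ITEM_PARAMS[item]["slip"]
    return p_know * (1 - s) + (1 - p_know) * g
```

For the same knowledge estimate, an item with a higher guess rate and lower slip rate yields a higher predicted probability of success, which is how the model captures item difficulty.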
Despite the prevalence of e-learning systems in schools, most of today's systems do not personalize educational content to the individual needs of each student. This paper proposes a new algorithm for sequencing questions to students that is empirically shown to lead to better performance and engagement in real schools when compared to a baseline approach. It is based on using knowledge tracing to model students' skill acquisition over time, and on selecting questions that advance the student's learning within the range of the student's capabilities, as determined by the model. The algorithm is based on a Bayesian Knowledge Tracing (BKT) model that incorporates partial credit scores, reasons about multiple attempts to solve problems, and integrates item difficulty. This model is shown to outperform other BKT models that do not reason about these features (or reason about only some of them). The model was incorporated into a sequencing algorithm and deployed in two classes in different schools, where it was compared to a baseline sequencing algorithm designed by pedagogical experts. In both classes, students using the BKT sequencing approach solved more difficult questions and achieved higher performance than did students who used the expert-based approach. Students were also more engaged when using the BKT approach, as determined by their interaction time and number of log-ins to the system, as well as their reported opinion. We expect our approach to inform the design of better methods for sequencing and personalizing educational content to students to meet their individual learning needs.
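The abstract describes selecting questions "within the range of the student's capabilities" but does not give the exact selection rule. One common way to operationalize such a rule is a target-success-rate heuristic over model predictions; the sketch below is an illustrative stand-in, not the paper's algorithm, and the target value and question format are assumptions.

```python
# Assumed "productive struggle" target success rate; not from the paper.
TARGET_P_CORRECT = 0.7

def select_question(p_know, questions):
    """Pick the question whose model-predicted probability of a correct
    response is closest to the target.

    questions: list of (question_id, guess, slip) tuples (hypothetical
    format). The prediction uses the standard BKT emission model.
    """
    def predicted(q):
        _, guess, slip = q
        return p_know * (1 - slip) + (1 - p_know) * guess
    return min(questions, key=lambda q: abs(predicted(q) - TARGET_P_CORRECT))[0]
```

Under this heuristic a student with a low mastery estimate is routed to forgiving questions, while a student near mastery is routed to harder ones, which matches the abstract's claim that BKT-sequenced students reached more difficult questions.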
“…Scaffolding problems, a feedback style within the ASSISTments platform typically used to break a problem down into steps or to provide worked examples, were excluded from the final dataset. The decision to work with main problems was based in part on the justification made by Pardos & Heffernan [9] when using a similar dataset from the ASSISTments platform. As scaffolding problems are guided, they offer a less accurate view of skill knowledge and skew performance data within an opportunity-based analysis.…”
Section: Dataset (mentioning)
confidence: 99%
“…Expansion in the field of educational data mining has since led to a number of alternative or supplementary learning models. For instance, researchers have attempted to impart individualized prior-knowledge nodes for each student [8], to supplement KT with a flexible metric for item difficulty [9], to ensemble various methods of binning student performance (i.e., partial credit) with standard KT models [12], and to consider the sequence of a student's actions within the tutor to help predict next problem correctness [3].…”
Student modeling within intelligent tutoring systems is a task largely driven by binary models that predict student knowledge or next problem correctness (i.e., Knowledge Tracing (KT)). However, using a binary construct for student assessment often causes researchers to overlook the feedback innate to these platforms. The present study considers a novel method of tabling an algorithmically determined partial credit score and problem difficulty bin for each student's current problem to predict both binary and partial next problem correctness. This study was conducted using log files from ASSISTments, an adaptive mathematics tutor, from the 2012-2013 school year. The dataset consisted of 338,297 problem logs linked to 15,253 unique student identification numbers. Findings suggest that an efficiently tabled model considering partial credit and problem difficulty performs about as well as KT on binary predictions of next problem correctness. This method provides the groundwork for modifying KT in an attempt to optimize student modeling.
“…It uses correct and incorrect responses in students' problem-solving attempts to infer the probability of a student knowing the skill underlying the problem-solving step at hand. This method has been used to investigate learning differences between conditions during the acquisition phase (Pardos et al 2011).…”
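The inference step this snippet refers to, updating the probability that a student knows a skill after a correct or incorrect response, is the standard BKT Bayesian update followed by a learning transition. A minimal sketch with illustrative, not fitted, parameter values:

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One BKT step: Bayes update on the observed response, then the
    learning transition. Parameter values are illustrative only."""
    if correct:
        # P(known | correct): correct answers can also come from guessing.
        post = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # P(known | incorrect): incorrect answers can also come from slips.
        post = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Transition: the student may learn the skill on this opportunity.
    return post + (1 - post) * p_learn
```

Applied over a sequence of responses, this yields the per-opportunity knowledge estimates used to compare learning between conditions.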
Section: Bayesian Knowledge Tracing: Differences During the Acquisition (mentioning)
confidence: 99%
“…To do so, we adapted modeling techniques from prior work that evaluated the learning value of different forms of tutoring in (non-experiment) log data of an intelligent tutor (Pardos et al 2010). Furthermore, we use techniques from KT-IDEM (Pardos and Heffernan 2011) to model different guess and slip parameters for problems depending on the representation used in the tutor problem. This procedure allows us to estimate four different learning rates per task type, each corresponding to the particular condition (i.e., blocked practice, fully interleaved, moderately interleaved, or increasingly interleaved) assigned to the student, as opposed to using a single learning rate per task type, independent of condition.…”
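The per-condition learning rates described in this snippet amount to choosing a different BKT transition parameter depending on the practice schedule. The sketch below shows only the transition component with invented rate values; the paper fits these from data, and the condition names follow the snippet.

```python
# Hypothetical fitted values: one learn rate per practice schedule
# (the actual fitted rates are reported in the paper, not here).
LEARN_RATE = {
    "blocked": 0.05,
    "fully_interleaved": 0.12,
    "moderately_interleaved": 0.10,
    "increasingly_interleaved": 0.09,
}

def trace_knowledge(p_init, n_opportunities, condition):
    """Forward pass of the transition component only: how the estimated
    mastery probability grows across practice opportunities under a
    given condition's learning rate."""
    t = LEARN_RATE[condition]
    p = p_init
    for _ in range(n_opportunities):
        p = p + (1 - p) * t
    return p
```

Comparing the traced curves across conditions is how a latent learning-rate analysis can show an advantage for interleaving even when raw error rates during practice favor blocking.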
Providing learners with multiple representations of learning content has been shown to enhance learning outcomes. When multiple representations are presented across consecutive problems, we have to decide in what sequence to present them. Prior research has demonstrated that interleaving task types (as opposed to blocking them) can foster learning. Do the same advantages apply to interleaving representations? We addressed this question using a variety of research methods. First, we conducted a classroom experiment with an intelligent tutoring system for fractions. We compared four practice schedules of multiple graphical representations: blocked, fully interleaved, moderately interleaved, and increasingly interleaved. Based on data from 230 4th and 5th-grade students, we found that interleaved practice leads to better learning outcomes than blocked practice on a number of measures. Second, we conducted a think-aloud study to gain insights into the learning mechanisms underlying the advantage of interleaved practice. Results show that students make connections between representations only when explicitly prompted to do so (and not spontaneously). This finding suggests that reactivation, rather than abstraction, is the main mechanism to account for the advantage of interleaved practice. Third, we used methods derived from Bayesian knowledge tracing to analyze tutor log data from the classroom experiment. Modeling latent measures of students' learning rates, we find higher learning rates for interleaved practice than for blocked practice. This finding extends prior research on practice schedules, which shows that interleaved practice (compared to blocked practice) impairs students' problem-solving performance during the practice phase when using raw performance measures such as error rates. Our findings have implications for the design of multi-representational learning materials and for research on adaptive practice schedules in intelligent tutoring systems.