2013
DOI: 10.1007/978-3-642-38844-6_18
Comparing and Combining Eye Gaze and Interface Actions for Determining User Learning with an Interactive Simulation

Cited by 28 publications (9 citation statements) · References 18 publications
“…Finally, action and eye-data can be combined to build real-time classifiers that leverage both sources for predicting student learning as the interaction proceeds and intervening if learning is predicted to be low. Kardan and Conati (2013) discuss one such classifier for an interactive simulation for university students. We aim to investigate if and how this approach generalizes to more game-like environments, as exemplified by Prime Climb.…”
Section: Discussion
confidence: 99%
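The statement above describes classifiers that combine interface-action and eye-gaze features to predict learning as the interaction unfolds. A minimal sketch of that idea is shown below; the feature names, weights, and decision rule are illustrative assumptions, not the model from Kardan and Conati (2013):

```python
# Hypothetical sketch: fuse action-log and eye-gaze features into one
# feature vector and classify with a toy linear rule. All names and
# weights are illustrative, not taken from the cited paper.

def combine_features(action_feats, gaze_feats):
    """Concatenate the two feature sources into a single vector."""
    return action_feats + gaze_feats

def predict_learning(features, weights, bias=0.0):
    """Toy linear classifier: returns 1 (high learning predicted)
    if the weighted sum exceeds zero, else 0 (low learning)."""
    score = bias + sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0

# Example: two action features (e.g. action rate, pause length) and
# two gaze features (e.g. fixation rate, time on a key AOI).
actions = [0.8, 0.2]
gaze = [0.6, 0.9]
x = combine_features(actions, gaze)          # [0.8, 0.2, 0.6, 0.9]
weights = [1.0, -0.5, 1.2, 0.7]              # illustrative weights
label = predict_learning(x, weights, bias=-1.0)
```

In practice the weights would be learned from labeled sessions (e.g. via logistic regression) rather than hand-set as here.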
“…We have shown that from the outset, our gaze-based classifiers outperform a simple baseline, and that even after observing only a few seconds of gaze data, we can make predictions with up to 60% accuracy. While these accuracies may not be high enough yet for driving a user-adaptive system, it is worth noting again that in this paper we have solely used gaze data, and that the combination of this data with complementary input features such as interaction data, mouse-tracking data, or other user characteristics is likely to improve prediction accuracies (as shown in [15]). …”
Section: Discussion and Future Work
confidence: 95%
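The statement above concerns early prediction: classifying a user after observing only a short prefix of the session. The sketch below illustrates that evaluation pattern with a deliberately simple running-mean evidence rule; the scores and threshold are hypothetical, not the classifier evaluated in the quoted work:

```python
# Hypothetical sketch of over-time (early) prediction: re-classify
# after each observed event, so early predictions use only a prefix
# of the session. The evidence rule here is a running mean, chosen
# for simplicity; it stands in for a real trained classifier.

def running_mean(values):
    """Yield the mean of values[:i] after each new value."""
    total = 0.0
    for i, v in enumerate(values, start=1):
        total += v
        yield total / i

def predictions_over_time(event_scores, threshold=0.5):
    """Emit a high/low-learning prediction (1/0) after each event,
    based on the mean evidence score observed so far."""
    return [1 if m > threshold else 0 for m in running_mean(event_scores)]

# One illustrative per-event evidence score per interaction event.
scores = [0.2, 0.9, 0.8, 0.7]
preds = predictions_over_time(scores)  # one prediction per session prefix
```

Evaluating `preds` against the final session label at each prefix length is what yields accuracy-over-time curves like those the quoted papers report.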
“…In Kardan and Conati's (2012) study, data was captured over the whole session; thus, to achieve a high accuracy, the classifier needed complete session data, otherwise the accuracy of decisions would not be high enough, specifically in the early stages of learning. Kardan and Conati (2013) show that the actions from logs plus eye data improve the classifier's accuracy to 85% by considering 22% of all interaction data.…”
Section: Eye Tracking and ITSs
confidence: 96%