Exploring Gaze Data for Determining User Learning with an Interactive Simulation
2012
DOI: 10.1007/978-3-642-31454-4_11

Cited by 46 publications (32 citation statements: 3 supporting, 29 mentioning, 0 contrasting)
References 16 publications
“…Interestingly, LR consistently achieved the highest accuracies compared to other machine learning models, such as SVM or Decision Trees. Although we do not have an intuitive explanation for this finding, several other works have similarly found LR to perform well with eye gaze data [Kardan and Conati 2012; Bondareva et al. 2013].…”
Section: Summary of Results and Discussion (supporting)
confidence: 67%
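The excerpt above reports a model-comparison finding rather than a procedure, but the kind of evaluation it refers to is easy to illustrate. Below is a minimal sketch, not the cited authors' pipeline: it compares logistic regression, an SVM, and a decision tree with 10-fold cross-validation on a hypothetical table of per-user aggregated gaze features (the file name gaze_features.csv, the column names, and the label column are all assumptions).

```python
# Hedged sketch only: comparing LR, SVM, and a decision tree on aggregated
# gaze features. The data file and column names below are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("gaze_features.csv")      # one row per user/session (assumed)
X = df.drop(columns=["label"])             # aggregated gaze statistics
y = df["label"]                            # e.g., high vs. low learning gain

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "DecisionTree": DecisionTreeClassifier(max_depth=5),
}

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)   # scale features for LR/SVM
    scores = cross_val_score(pipe, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```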
“…In particular, these analyses have typically consisted of offline processes that require further human analysis (e.g., manually analyzing eye gaze coordinate plots [Iqbal and Bailey 2004]). In terms of actually using raw eye-tracking data for real-time prediction, most research has so far focused on identifying the user's cognitive processes while she is performing nonvisualization activities, such as during exploratory e-learning [Kardan and Conati 2012; Conati and Merten 2007], quizzes [Courtemanche et al. 2011], simple puzzle games [Eivazi and Bednarik 2011], or information search tasks (e.g., word search) [Simola et al. 2008]. By contrast, our gaze-based work focuses on information visualization, where a user's main activity is to perform simple visualization lookup and comparison tasks.…”
Section: Related Work (mentioning)
confidence: 99%
“…Next, association rule mining is applied to each cluster to extract its common behavior patterns, i.e., rules of the form X → c, where X is a set of feature-value pairs and c is the predicted cluster for the data points to which X applies (Table 1 shows samples of these rules, which will be further explained in the next section). We built the user model for the adaptive CSP applet by applying Behavior Discovery to a dataset of 110 users obtained from two previous studies on this simulation [12, 13]. From this dataset, the Behavior Discovery generated two clusters of users that achieved significantly different learning levels, labeled as High Learning Gain (HLG) and Low Learning Gain (LLG) groups from now on.…”
Section: Modeling Student Learning in the CSP Applet (mentioning)
confidence: 99%
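The excerpt summarizes the Behavior Discovery user-modeling pipeline: cluster users by their interaction behavior, then mine association rules of the form X → c that link feature-value patterns to a cluster. The sketch below is only an approximation under stated assumptions, not the method of [12, 13]: it uses k-means for clustering and mlxtend's apriori as a stand-in rule miner, with a hypothetical data file, a simple above/below-median discretization, and arbitrary support/confidence thresholds.

```python
# Hedged sketch of a Behavior-Discovery-style pipeline; data file, feature
# discretization, and thresholds are hypothetical stand-ins.
import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import apriori, association_rules

df = pd.read_csv("interaction_features.csv")   # one row per user, numeric features (assumed)

# 1. Cluster users into two groups (e.g., candidates for HLG vs. LLG).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(df)

# 2. Discretize each feature (above/below its median) and one-hot encode,
#    adding the cluster labels as items so rules X -> cluster can be mined.
items = pd.DataFrame(index=df.index)
for col in df.columns:
    items[f"{col}=high"] = df[col] > df[col].median()
    items[f"{col}=low"] = ~items[f"{col}=high"]
items["cluster=0"] = clusters == 0
items["cluster=1"] = clusters == 1

# 3. Mine frequent itemsets, then keep rules whose consequent is a cluster label.
freq = apriori(items, min_support=0.2, use_colnames=True)
rules = association_rules(freq, metric="confidence", min_threshold=0.8)
class_rules = rules[rules["consequents"].apply(
    lambda c: len(c) == 1 and next(iter(c)).startswith("cluster="))]
print(class_rules[["antecedents", "consequents", "support", "confidence"]])
```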
“…For example, Qu & Johnson [21] showed that user gaze behaviors can help predict users' motivation during interaction with an intelligent tutoring system. Kardan et al. [14] and Bondareva et al. [4] showed that eye tracking data can be used to predict student learning with two different educational environments, and that this prediction can be performed early enough to possibly provide adaptive interventions that can foster learning. D'Mello et al. [8] evaluated an intelligent tutoring system that both detected and reacted to students' lack of attention based on gaze patterns.…”
Section: Eye-Tracking in User Modeling for Adaptive Systems (mentioning)
confidence: 98%
“…Following the approach in [14] and [24], we generated a large set of eye-tracking features by calculating statistics upon basic eye-tracking measures (see Table 1 & Table 2). [Table excerpt: saccade length, i.e., the distance between the two fixations delimiting the saccade (d in Figure 4); relative saccade angles.]…”
Section: Eye Tracking Measures and Features (mentioning)
confidence: 99%
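The excerpt describes deriving gaze features by computing summary statistics over basic measures such as fixation duration, saccade length (the distance between the two fixations delimiting a saccade), and relative saccade angle. The sketch below illustrates that idea under an assumed input format, a time-ordered list of (x, y, duration) fixations; it is not the actual feature set of Table 1 or Table 2 in the cited work.

```python
# Hedged sketch: summary-statistic gaze features from a fixation sequence.
# The input format of (x, y, duration_ms) tuples is an assumption.
import math
from statistics import mean, pstdev

def gaze_features(fixations):
    """fixations: time-ordered list of (x, y, duration_ms) tuples."""
    durations = [d for _, _, d in fixations]
    # Saccade length: distance between the two fixations delimiting the saccade.
    lengths = [math.dist(fixations[i][:2], fixations[i + 1][:2])
               for i in range(len(fixations) - 1)]
    # Relative saccade angle: change of direction between consecutive saccades.
    angles = []
    for i in range(len(fixations) - 2):
        a1 = math.atan2(fixations[i + 1][1] - fixations[i][1],
                        fixations[i + 1][0] - fixations[i][0])
        a2 = math.atan2(fixations[i + 2][1] - fixations[i + 1][1],
                        fixations[i + 2][0] - fixations[i + 1][0])
        d = abs(a2 - a1) % (2 * math.pi)
        angles.append(min(d, 2 * math.pi - d))

    def stats(values, prefix):
        # Simple statistics only; real feature sets may add min, rate, etc.
        if not values:
            return {}
        return {f"{prefix}_mean": mean(values),
                f"{prefix}_std": pstdev(values),
                f"{prefix}_sum": sum(values),
                f"{prefix}_max": max(values)}

    features = {}
    features.update(stats(durations, "fixation_duration"))
    features.update(stats(lengths, "saccade_length"))
    features.update(stats(angles, "rel_saccade_angle"))
    return features

# Example with three hypothetical fixations:
print(gaze_features([(100, 200, 250), (300, 220, 180), (320, 400, 300)]))
```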