2015
DOI: 10.1609/aaai.v29i1.9485
Constructing Models of User and Task Characteristics from Eye Gaze Data for User-Adaptive Information Highlighting

Abstract: A user-adaptive information visualization system capable of learning models of users and the visualization tasks they perform could provide interventions optimized for helping specific users in specific task contexts. In this paper, we investigate the accuracy of predicting visualization tasks, user performance on tasks, and user traits from gaze data. We show that predictions made with a logistic regression model are significantly better than a baseline classifier, with particularly strong results for predict…
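The abstract's core comparison, a logistic regression classifier evaluated against a baseline classifier on gaze-derived features, can be illustrated with a minimal sketch. The features and data below are synthetic stand-ins, not the paper's dataset; the majority-class baseline is one common choice of baseline and is an assumption here.

```python
# Minimal sketch: logistic regression vs. a majority-class baseline on
# synthetic "gaze feature" vectors. All data and feature names are
# illustrative assumptions, not the paper's actual features or results.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Two synthetic per-trial features, e.g. fixation rate and mean fixation length.
X = rng.normal(size=(n, 2))
# Binary task label weakly tied to the features, so a learner can beat chance.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# 5-fold cross-validated accuracy for the baseline and the trained model.
baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5).mean()
model = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"baseline accuracy={baseline:.2f}, logistic accuracy={model:.2f}")
```

On data with any real signal, the logistic model's cross-validated accuracy should clearly exceed the majority-class baseline, which is the shape of the comparison the abstract reports.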

Cited by 11 publications (5 citation statements)
References 23 publications
“…The work by Gingerich and Conati [27] demonstrates how decisions made during model development, using personalized visualizations to highlight opportunities, were essential in selecting the best modeling strategy. De Baets and Harvey [28] conducted a controlled experiment to compare the models developed by two groups.…”
Section: Related Work
confidence: 99%
“…As an alternative, user characteristics could be predicted in real-time by the system, leveraging implicit information on the user's interaction behavior. Initial evidence that such real-time prediction is possible by leveraging eye-tracking data has been provided for perceptual speed, spatial memory, vis literacy and visual WM during interaction with MQ-transit [12], as well as for perceptual speed and visual WM while processing bar and radar charts [24]. These works have also shown that prediction is possible early on during visualization processing, typically after observing 10% to 50% of a user's data.…”
Section: Implications for Personalization
confidence: 99%
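The statement above notes that prediction is possible after observing only 10% to 50% of a user's gaze data. A hedged sketch of that setup is simply truncating each trial's fixation sequence to a prefix before feature extraction; the helper name and data format below are assumptions for illustration.

```python
# Hedged sketch: early prediction operates on a prefix of the gaze data,
# e.g. the first 10% or 50% of fixations in a trial. The helper name
# `observe_prefix` and the list-of-records format are illustrative.
def observe_prefix(fixations, fraction):
    """Return the first `fraction` of a trial's fixations (at least one)."""
    if not 0 < fraction <= 1:
        raise ValueError("fraction must be in (0, 1]")
    k = max(1, int(len(fixations) * fraction))
    return fixations[:k]

full = list(range(20))               # stand-in for 20 fixation records
early = observe_prefix(full, 0.10)   # what the model sees at the 10% mark
half = observe_prefix(full, 0.50)    # what the model sees at the 50% mark
```

Features computed over these prefixes feed the same classifiers as the full-trial features, which is what makes the early-prediction comparison possible.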
“…To investigate the impact of user/task characteristics on gaze behavior (Objective 2), we ran similar mixed models where the dependent variables were a variety of summative statistics on gaze measures, e.g., rate of gaze fixations, average fixation length, or percentage of gaze transitions between salient areas of the visualization known as Areas of Interest, or AOIs (Toker et al. 2013). Finally, to investigate whether gaze data can help build user models that predict relevant user/task characteristics during visualization processing (Objective 3), we conducted machine learning experiments that leveraged different feature sets based on gaze data to predict user performance and cognitive/personality traits, as well as task type and difficulty (Steichen et al. 2013, Gingerich and Conati 2015). We have completed all types of analysis for both the Bar/Radar study (Toker et al. 2012, Toker et al. 2013) and the Intervention study (Carenini et al. 2014, Gingerich and Conati 2015), whereas for the VC study we have so far performed only the analysis related to Objective 1 (Conati et al. 2014).…”
Section: User Studies
confidence: 99%
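The summative gaze statistics named in the statement above (fixation rate, average fixation length, percentage of AOI transitions) can be sketched directly. The fixation record format (timestamp, duration, AOI label) is an assumption for illustration, not the cited studies' actual data schema.

```python
# Hedged sketch of summative gaze statistics: fixation rate, mean fixation
# length, and percentage of transitions between AOIs. The record format
# (timestamp_ms, duration_ms, aoi) is an illustrative assumption.
from collections import namedtuple

Fixation = namedtuple("Fixation", ["timestamp_ms", "duration_ms", "aoi"])

def gaze_summary(fixations):
    """Compute summative statistics over one trial's fixation sequence."""
    total_time_s = (fixations[-1].timestamp_ms - fixations[0].timestamp_ms) / 1000
    rate = len(fixations) / total_time_s if total_time_s > 0 else 0.0
    mean_len = sum(f.duration_ms for f in fixations) / len(fixations)
    # A transition is any pair of consecutive fixations in different AOIs.
    transitions = sum(1 for a, b in zip(fixations, fixations[1:]) if a.aoi != b.aoi)
    pct_transitions = 100.0 * transitions / max(len(fixations) - 1, 1)
    return {"fixation_rate_hz": rate,
            "mean_fixation_ms": mean_len,
            "aoi_transition_pct": pct_transitions}

# Toy trial: four fixations across three hypothetical AOIs of a chart.
trial = [Fixation(0, 200, "legend"), Fixation(250, 300, "chart"),
         Fixation(600, 250, "chart"), Fixation(900, 180, "labels")]
stats = gaze_summary(trial)
```

Statistics like these serve double duty in the quoted work: as dependent variables in the mixed models (Objective 2) and as input features for the machine-learning predictors (Objective 3).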