In a typical eye-tracking experiment, one computer system or subsystem presents the stimuli to the participant and records manual responses, while another collects the eye movement data, with little interaction between the two during the experiment. This article demonstrates how the two systems can interact with each other to facilitate a richer set of experimental designs and applications and to produce more accurate eye tracking data. In an eye-tracking study, a participant is periodically instructed to look at specific screen locations, or explicit required fixation locations (RFLs), in order to calibrate the eye tracker to the participant. The design of an experimental procedure will also often produce a number of implicit RFLs: screen locations that the participant must look at within a certain window of time, or at a certain moment, in order to accomplish a task correctly, but without explicit instructions to fixate those locations. In these windows of time or at these moments, the disparity between the fixations recorded by the eye tracker and the screen locations corresponding to implicit RFLs can be examined, and the results of the comparison can be used for a variety of purposes. This article shows how the disparity can be used to monitor deterioration in the accuracy of the eye tracker calibration and to automatically invoke a recalibration procedure when necessary. This article also demonstrates how the disparity varies across screen regions and participants, and how each participant's unique error signature can be used to reduce the systematic error in the eye movement data collected for that participant.
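To make the approach concrete, the sketch below (in Python, with hypothetical names and an assumed disparity threshold; it is not the authors' implementation) shows how disparity at implicit RFLs might be accumulated to trigger recalibration and to estimate a participant's error signature:

```python
import numpy as np

# Hypothetical sketch, not the authors' implementation. Fixations and
# RFLs are (x, y) screen coordinates; the units and the recalibration
# threshold below are assumed for illustration only.
RECAL_THRESHOLD = 40.0  # assumed maximum tolerable mean disparity, in pixels

def disparity(fixation, rfl):
    """Distance between a recorded fixation and its implicit RFL."""
    return float(np.linalg.norm(np.asarray(fixation) - np.asarray(rfl)))

def needs_recalibration(recent_pairs, threshold=RECAL_THRESHOLD):
    """Invoke recalibration when the mean disparity at recently
    passed implicit RFLs exceeds the threshold."""
    errors = [disparity(f, r) for f, r in recent_pairs]
    return np.mean(errors) > threshold

def error_signature(pairs):
    """A participant's systematic error: the mean (dx, dy) offset of
    fixations from the implicit RFLs (ideally computed per screen
    region, since the disparity varies across regions)."""
    offsets = [np.asarray(f, dtype=float) - np.asarray(r, dtype=float)
               for f, r in pairs]
    return np.mean(offsets, axis=0)

def correct_fixations(fixations, signature):
    """Subtract the systematic offset to reduce error in the data."""
    return [np.asarray(f, dtype=float) - signature for f in fixations]
```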
This research investigates the cognitive strategies and eye movements that people use to search for a known item in a hierarchical computer display. Computational cognitive models were built to simulate the visual-perceptual and oculomotor processing required to search hierarchical and nonhierarchical displays. Eye movement data were collected and compared on over a dozen measures with the a priori predictions of the models. Though it is well accepted that hierarchical layouts are easier to search than nonhierarchical layouts, the underlying cognitive basis for this design heuristic has not yet been established. This work combines cognitive modeling and eye tracking to explain this and numerous other visual design guidelines. This research also demonstrates the power of cognitive modeling for predicting, explaining, and interpreting eye movement data, and how to use eye tracking data to confirm and disconfirm modeling details.
Visual search is an important part of human-computer interaction. It is critical that we build theory about how people visually search displays in order to better support the users' visual capabilities and limitations in everyday tasks. One way of building such theory is through computational cognitive modeling. The ultimate promise of cognitive modeling in HCI is to provide the science base needed for predictive interface analysis tools. This paper discusses computational cognitive modeling of the perceptual, strategic, and oculomotor processes people used in a visual search task. This work refines and rounds out previously reported cognitive modeling and eye tracking analysis. A revised "minimal model" of visual search is presented that explains a variety of eye movement data better than the original model. The revised model uses a parsimonious strategy that is not tied to a particular visual structure or feature beyond the location of objects. Three characteristics of the minimal strategy are discussed in detail.
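One plausible reading of such a parsimonious strategy is "fixate a nearby, not-yet-examined object." The sketch below is an illustration under that assumption, not the paper's actual model:

```python
import math
import random

def nearest_unvisited(gaze, objects, visited):
    """Pick the next fixation: a nearby object not yet examined.
    Distance ties are broken at random, so the scan path depends on
    no layout structure beyond the locations of the objects."""
    candidates = [o for o in objects if o not in visited]
    if not candidates:
        return None
    d_min = min(math.dist(gaze, o) for o in candidates)
    nearest = [o for o in candidates if math.dist(gaze, o) == d_min]
    return random.choice(nearest)

def search(start, objects, target):
    """Fixate objects one at a time until the target is found."""
    gaze, visited, scan_path = start, set(), []
    while True:
        nxt = nearest_unvisited(gaze, objects, visited)
        if nxt is None:
            return scan_path  # layout exhausted without finding target
        scan_path.append(nxt)
        visited.add(nxt)
        if nxt == target:
            return scan_path
        gaze = nxt
```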
Human visual search plays an important role in many human-computer interaction (HCI) tasks. Better models of visual search are needed not just to predict overall performance outcomes, such as whether people will be able to find the information needed to complete an HCI task, but to understand the many human processes that interact in visual search, which will in turn inform the detailed design of better user interfaces. This article describes a detailed instantiation, in the form of a computational cognitive model, of a comprehensive theory of human visual processing known as "active vision" (Findlay & Gilchrist, 2003). The computational model is built using the Executive Process-Interactive Control (EPIC) cognitive architecture. Eye-tracking data from three experiments inform the development and validation of the model. The modeling asks, and at least partially answers, the four questions of active vision: (a) What can be perceived in a fixation? (b) When do the eyes move? (c) Where do the eyes move? (d) What information is integrated between eye movements? Answers include: (a) Items nearer the point of gaze are more likely to be perceived, and the visual features of objects are sometimes misidentified. (b) The eyes move after the fixated visual stimulus has been processed (i.e., has entered working memory). (c) The eyes tend to go to nearby objects. (d) Only the coarse spatial information of what has been fixated is likely maintained between fixations. The model developed to answer these questions has both scientific and practical value.
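Answer (a) can be illustrated with a small sketch: perception probability falls off with eccentricity, and perceived features are occasionally wrong. The decay and misidentification rates here are assumed for illustration, not fitted values from the model:

```python
import math
import random

def p_perceive(eccentricity_deg, decay=0.3):
    """Probability an item is perceived in a fixation, falling off
    with its distance from the point of gaze (assumed decay rate)."""
    return math.exp(-decay * eccentricity_deg)

def perceive(item_features, eccentricity_deg, misid_rate=0.1):
    """Return the perceived features, or None if the item is missed.
    Each feature is independently misidentified at an assumed rate."""
    if random.random() > p_perceive(eccentricity_deg):
        return None  # item not perceived in this fixation
    return [f if random.random() > misid_rate else "misidentified"
            for f in item_features]
```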
Visual search is an important aspect of many tasks, but it is not well understood how layout design affects visual search. This research uses reaction time data, eye movement data, and computational cognitive modeling to investigate the effect of local density on the visual search of structured layouts of words. Layouts were all-sparse, all-dense, or mixed. Participants found targets in sparse groups faster, and searched sparse groups before dense groups. Participants made slightly more fixations per word in sparse groups, but these were much shorter fixations. The modeling suggests that participants may have attempted to process words within a consistent visual angle regardless of density, but that they were more likely to miss the target if it was in a dense group. When combining densities in a layout, it may therefore be beneficial to place important information in sparse groups.
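The modeling claim can be illustrated as follows: if each fixation processes the words inside a fixed visual angle, dense groups put more words in that span but raise the chance of missing the target among them. The span and miss rates below are assumed for illustration, not the fitted model:

```python
import random

SPAN_DEG = 2.0  # assumed visual angle processed per fixation
MISS_RATE = {"sparse": 0.05, "dense": 0.15}  # assumed per-target miss rates

def words_in_span(words_per_degree):
    """More words fall inside the fixed span when density is higher."""
    return int(SPAN_DEG * words_per_degree)

def fixation_detects_target(group_density):
    """A fixation whose span contains the target may still miss it,
    and does so more often in dense groups."""
    return random.random() > MISS_RATE[group_density]
```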
Human-computer systems intended for time-critical multitasking need to be designed with an understanding of how humans can coordinate and interleave perceptual, memory, and motor processes. This paper presents human performance data for a highly practiced, time-critical dual task. In the first of the two interleaved tasks, participants tracked a target with a joystick. In the second, participants keyed in responses to objects moving across a radar display. Task manipulations included the peripheral visibility of the secondary display (visible or not) and the presence or absence of auditory cues to assist with the radar task. Eye movement analyses reveal extensive coordination and overlapping of human information processes, and the extent to which the task manipulations helped or hindered dual-task performance. For example, auditory cues helped only a little when the secondary display was peripherally visible, but they helped a lot when it was not.
Study Objectives: A cognitive throughput task known as the Digit Symbol Substitution Test (DSST) (or Symbol Digit Modalities Test) has been used as an assay of general cognitive slowing during sleep deprivation. Here, the effects of total sleep deprivation (TSD) on specific cognitive processes involved in DSST performance, including visual search, spatial memory, paired-associate learning, and motor response, were investigated through targeted task manipulations.

Methods: A total of 12 DSST variants, designed to manipulate the use of specific cognitive processes, were implemented in two laboratory-based TSD studies with N = 59 and N = 26 subjects, respectively. In each study, the Psychomotor Vigilance Test (PVT) was administered alongside the DSST variants.

Results: TSD reduced cognitive throughput on all DSST variants, with response time distributions exhibiting rightward skewing. All DSST variants showed practice effects, which were, however, minimized by inclusion of a pause between trials. Importantly, TSD-induced impairment on the DSST variants was not uniform, with a principal component analysis revealing three factors. Diffusion model decomposition of cognitive processes revealed that inter-individual differences during TSD on a two-alternative forced choice DSST variant were different from those on the PVT.

Conclusions: While reduced cognitive throughput has been interpreted to reflect general cognitive slowing, such TSD-induced impairment appears to reflect cognitive instability, as on the PVT, rather than general slowing. Further, comparisons between task variants revealed not one but three distinct underlying processes impacted by sleep deprivation. Moreover, the practice effect on the task was found to be independent of the TSD effect and minimized by a task pacing manipulation.
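As a rough illustration of the diffusion model decomposition mentioned above, the sketch below simulates a single two-alternative forced choice trial; the parameter values are arbitrary, not those fitted in these studies. Lowering the drift rate slows responses and lengthens the right tail of the simulated response time distribution, the kind of rightward skewing reported here:

```python
import random

def diffusion_trial(drift=0.3, bound=1.0, noise=1.0, dt=0.001,
                    nondecision=0.3):
    """Simulate one 2AFC trial: noisy evidence accumulates from zero
    until it reaches the upper (+bound) or lower (-bound) boundary.
    Returns (response_time_in_seconds, chose_upper_boundary)."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        # Euler step of the diffusion: drift plus Gaussian noise.
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    return nondecision + t, x > 0

# Example: compare RT distributions for rested vs. sleep-deprived
# (assumed) drift rates.
rested = [diffusion_trial(drift=0.5)[0] for _ in range(1000)]
deprived = [diffusion_trial(drift=0.2)[0] for _ in range(1000)]
```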