In a typical eye-tracking experiment, one computer system or subsystem presents the stimuli to the participant and records manual responses, while another collects the eye movement data, with little interaction between the two during the experiment. This article demonstrates how the two systems can interact to support a richer set of experimental designs and applications and to produce more accurate eye tracking data. In an eye-tracking study, a participant is periodically instructed to look at specific screen locations, or explicit required fixation locations (RFLs), in order to calibrate the eye tracker to the participant. The design of an experimental procedure will also often produce a number of implicit RFLs: screen locations that the participant must look at within a certain window of time, or at a certain moment, in order to accomplish a task successfully and correctly, but without explicit instructions to fixate those locations. In these windows of time or at these moments, the disparity between the fixations recorded by the eye tracker and the screen locations corresponding to implicit RFLs can be examined, and the results of the comparison can be used for a variety of purposes. This article shows how the disparity can be used to monitor deterioration in the accuracy of the eye tracker calibration and to automatically invoke a recalibration procedure when necessary. The article also demonstrates how the disparity varies across screen regions and participants, and how each participant's unique error signature can be used to reduce the systematic error in the eye movement data collected for that participant.
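The core computation described above can be sketched briefly. The following is an illustrative sketch, not the article's actual implementation: all function names, the sample coordinates, and the recalibration threshold are assumptions. It estimates a participant's systematic error (the mean offset between recorded fixations and implicit RFL locations), applies it as a correction, and flags when mean disparity suggests recalibration.

```python
import statistics

def disparity(fixation, rfl):
    """Euclidean distance in pixels between a recorded fixation and its RFL."""
    return ((fixation[0] - rfl[0]) ** 2 + (fixation[1] - rfl[1]) ** 2) ** 0.5

def error_signature(pairs):
    """Mean (dx, dy) offset across (fixation, rfl) pairs -- a participant's
    systematic error, which can be subtracted from later gaze samples."""
    dx = statistics.mean(f[0] - r[0] for f, r in pairs)
    dy = statistics.mean(f[1] - r[1] for f, r in pairs)
    return dx, dy

def needs_recalibration(pairs, threshold_px=40):
    """Flag recalibration when mean disparity exceeds a threshold.
    The 40-pixel default is an arbitrary placeholder, not from the article."""
    return statistics.mean(disparity(f, r) for f, r in pairs) > threshold_px

# Hypothetical (fixation, implicit-RFL) pairs in screen pixels.
pairs = [((512, 390), (500, 384)), ((118, 96), (100, 88)), ((842, 610), (820, 600))]
dx, dy = error_signature(pairs)
corrected = [(f[0] - dx, f[1] - dy) for f, _ in pairs]
```

In practice the correction would likely be computed per screen region, since the article notes that disparity varies across regions as well as across participants.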
To understand how people search for a known target item in an unordered pull-down menu, this research presents cognitive models that vary serial versus parallel processing of menu items, random versus systematic search, and the number of menu items that fit into the fovea simultaneously. Models covering these conditions were constructed and run using the EPIC cognitive architecture. The selection times predicted by the models are compared with the selection times of human subjects performing the same menu task. Comparing the predicted and observed times, the models reveal that (1) people process more than one menu item at a time, and (2) people search menus using both random and systematic search strategies.
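The contrast between these search strategies can be illustrated with a toy simulation. This is not one of the EPIC models; it is a minimal sketch under assumed parameters (9 menu items, a fovea covering 3 items) showing why systematic scanning needs fewer fixations on average than random fixation placement to locate a uniformly placed target.

```python
import random

def random_search(n_items, fovea=1, rng=None):
    """Fixate uniformly random menu positions until the target falls
    within the foveal window; return the number of fixations."""
    rng = rng or random.Random()
    target = rng.randrange(n_items)
    fixations = 0
    while True:
        fixations += 1
        start = rng.randrange(n_items)
        if start <= target < start + fovea:
            return fixations

def systematic_search(n_items, fovea=1, rng=None):
    """Scan top to bottom, taking in `fovea` items per fixation."""
    rng = rng or random.Random()
    target = rng.randrange(n_items)
    return target // fovea + 1

trials = 2000
rng = random.Random(42)
mean_random = sum(random_search(9, fovea=3, rng=rng) for _ in range(trials)) / trials
mean_systematic = sum(systematic_search(9, fovea=3, rng=rng) for _ in range(trials)) / trials
```

With a 3-item fovea over 9 items, systematic search averages about 2 fixations, while random search averages noticeably more; human data falling between such bounds is one way models of mixed random-and-systematic search can be motivated.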
This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined: unlabeled layouts that contain multiple groups of items but no group headings, labeled layouts in which items are grouped and each group has a useful heading, and a target-only layout that contains just one item. A number of plausible strategies were proposed for each layout. Each strategy was programmed into the EPIC cognitive architecture, producing models that simulate the human visual-perceptual, oculomotor, and cognitive processing required for the task. The models generate search time predictions. For unlabeled layouts, the mean layout search times are predicted by a purely random search strategy, and the more detailed positional search times are predicted by a noisy systematic strategy. The labeled layout search times are predicted by a hierarchical strategy in which first the group labels are systematically searched, and then the contents of the target group. The target-only layout search times are predicted by a strategy in which the eyes move directly to the sudden appearance of the target. The models demonstrate that human visual search performance can be explained largely in terms of the cognitive strategies that people use.
The seeming contradiction between “banner blindness” and Web users' complaints about distracting advertisements motivates a pair of experiments on the effect of banner ads on visual search. Experiment 1 measures perceived cognitive workload and search times for short words with two banners on the screen. Four kinds of banners are examined: (1) animated commercial, (2) static commercial, (3) cyan with flashing text, and (4) blank. On the NASA Task Load Index, participants report increased workload under flashing-text banners. Experiment 2 investigates search through news headlines at two levels of difficulty: exact matches and matches requiring semantic interpretation. Results show that both animated and static commercial banners decrease visual search speed. Eye tracking data reveal that people rarely look directly at banners. A post hoc memory test confirms low banner recall and, surprisingly, that animated banners are more difficult to remember than static look-alikes. The results have implications for cognitive modeling and Web design.
Children with severe motor impairments, such as those resulting from severe cerebral palsy, benefit greatly from assistive technology, but very little guidance is available on how to collaborate with this population as partners in the design of such technology. To explore how to facilitate such collaborations, a field-based participant observation study, together with structured and unstructured interviews, was conducted at a home for children with severe disabilities. Team-building collaborative design activities were pursued. Guidelines are proposed for how to collaborate with children with severe motor impairments.