In an effort toward standardization, this paper evaluates the performance of five eye movement classification algorithms in terms of their assessment of oculomotor fixation and saccadic behavior. The results indicate that the performance of these five commonly used algorithms varies dramatically, even for a simple stimulus-evoked task using a single, common threshold value. The important contributions of this paper are: 1) evaluation and comparison of the performance of five algorithms for classifying specific oculomotor behavior; 2) introduction and comparison of new standardized scores to provide more reliable classification performance; 3) logic for reasonable threshold-value selection for any eye movement classification algorithm based on the standardized scores; and 4) logic for establishing a criterion-based baseline for performance comparison between eye movement classification algorithms. The proposed techniques enable efficient and objective clinical applications, providing means to ensure meaningful automated eye movement classification.
Ternary eye movement classification, which separates fixations, saccades, and smooth pursuit in raw eye positional data, is extremely challenging. This article develops new eye-tracking algorithms and modifies existing ones for the purpose of conducting meaningful ternary classification. To this end, a set of qualitative and quantitative behavior scores is introduced to facilitate the assessment of classification performance and to provide a means for automated threshold selection. Experimental evaluation of the proposed methods is conducted using eye movement records obtained from 11 subjects at 1000 Hz in response to a step-ramp stimulus eliciting fixations, saccades, and smooth pursuits. Results indicate that a simple hybrid method that incorporates velocity and dispersion thresholding produces robust classification performance. It is concluded that behavior scores can aid automated threshold selection for those algorithms capable of successful classification.
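The hybrid velocity-plus-dispersion idea above can be sketched as a per-sample classifier: a velocity threshold separates saccades, and a dispersion threshold over a short window separates fixations from smooth pursuit. This is a minimal illustration, not the paper's algorithm; all threshold values (`vel_saccade`, `disp_fix`, `win`) are illustrative assumptions.

```python
# Minimal sketch of a hybrid velocity + dispersion ternary classifier.
# Thresholds are illustrative assumptions, not values from the paper.

def classify_ternary(x, y, hz=1000.0, vel_saccade=70.0, disp_fix=0.5, win=0.1):
    """Label each gaze sample 'saccade', 'fixation', or 'pursuit'.

    x, y        -- gaze position in degrees of visual angle
    hz          -- sampling rate in Hz
    vel_saccade -- velocity threshold (deg/s) separating saccades
    disp_fix    -- dispersion threshold (deg) separating fixation from pursuit
    win         -- dispersion window length in seconds
    """
    n = len(x)
    dt = 1.0 / hz
    half = max(1, int(win * hz) // 2)
    labels = []
    for i in range(n):
        # sample-to-sample angular velocity (deg/s)
        j = max(0, i - 1)
        vel = ((x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2) ** 0.5 / dt if i else 0.0
        if vel > vel_saccade:
            labels.append('saccade')
            continue
        # dispersion over a local window: (max - min in x) + (max - min in y)
        lo, hi = max(0, i - half), min(n, i + half + 1)
        disp = (max(x[lo:hi]) - min(x[lo:hi])) + (max(y[lo:hi]) - min(y[lo:hi]))
        labels.append('fixation' if disp < disp_fix else 'pursuit')
    return labels
```

Stationary samples fall below both thresholds (fixation), a slow drift stays below the velocity threshold but exceeds the window dispersion (pursuit), and a rapid jump exceeds the velocity threshold (saccade).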
Event detection is a challenging stage in eye movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted based on eye movement data quality. Here we show that fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any already manually or algorithmically detected events can be used to train a classifier to produce similar classification of other data, without the need for a user to set parameters. In this study, we explore the application of the random forest machine-learning technique to the detection of fixations, saccades, and post-saccadic oscillations (PSOs). To show the practical utility of the proposed method for applications that employ eye movement classification algorithms, we provide an example in which the method is used in an eye movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.
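The train-on-labeled-events idea can be sketched as follows. This is a hypothetical, much-simplified illustration of the workflow, not the published detector: the feature set (per-sample velocity and acceleration) and the hyperparameters are assumptions chosen for brevity.

```python
# Illustrative sketch: sample-level event detection with a random forest.
# Features and hyperparameters are simplifying assumptions, not the
# published method's configuration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sample_features(x, hz=1000.0):
    """Per-sample features: absolute velocity and absolute acceleration."""
    v = np.abs(np.gradient(x) * hz)   # deg/s
    a = np.abs(np.gradient(v) * hz)   # deg/s^2
    return np.column_stack([v, a])

def train_event_detector(x, labels, hz=1000.0):
    """Train on samples already labeled manually or by another algorithm."""
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(sample_features(x, hz), labels)
    return clf

def detect_events(clf, x, hz=1000.0):
    """Classify new recordings without hand-tuned thresholds."""
    return clf.predict(sample_features(x, hz))
```

The key property mirrored here is that no detection threshold is exposed to the user: data quality is absorbed into the learned decision boundaries.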
A common aspect of individuality is our subjective preferences in evaluation of reward and effort. The neural circuits that evaluate these commodities influence circuits that control our movements, raising the possibility that vigor differences between individuals may also be a trait of individuality, reflecting a willingness to expend effort. In contrast, classic theories in motor control suggest that vigor differences reflect a speed-accuracy trade-off, predicting that those who move fast are sacrificing accuracy for speed. Here we tested these contrasting hypotheses. We measured motion of the eyes, head, and arm in healthy humans during various elementary movements (saccades, head-free gaze shifts, and reaching). For each person we characterized their vigor, i.e., the speed with which they moved a body part (peak velocity) with respect to the population mean. Some moved with low vigor, while others moved with high vigor. Those with high vigor tended to react sooner to a visual stimulus, moving both their eyes and arm with a shorter reaction time. Arm and head vigor were tightly linked: individuals who moved their head with high vigor also moved their arm with high vigor. However, eye vigor did not correspond strongly with arm or head vigor. In all modalities, vigor had no impact on end-point accuracy, demonstrating that differences in vigor were not due to a speed-accuracy trade-off. Our results suggest that movement vigor may be a trait of individuality, not reflecting a willingness to accept inaccuracy but demonstrating a propensity to expend effort. NEW & NOTEWORTHY A common aspect of individuality is how we evaluate economic variables like reward and effort. This valuation affects not only decision making but also motor control, raising the possibility that vigor may be distinct between individuals but conserved across movements within an individual. 
Here we report conservation of vigor across elementary skeletal movements, but not eye movements, raising the possibility that the individuality of our movements may be driven by a common neural mechanism of effort evaluation across modalities of skeletal motor control.
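The vigor measure described above (the speed of a body part relative to the population mean) can be sketched in simplified form. Note this ignores the amplitude dependence of peak velocity, which the actual analysis must account for (e.g., by normalizing within movements of similar amplitude); the function name and units are assumptions for illustration.

```python
# Simplified sketch of a vigor score: each individual's mean peak velocity
# expressed relative to the population mean. Amplitude dependence is
# deliberately ignored here for brevity.

def vigor_scores(peak_velocities):
    """peak_velocities: {subject_id: mean peak velocity} for one modality.

    Returns {subject_id: vigor}, where vigor > 1 means the subject moves
    faster than the population average and vigor < 1 means slower.
    """
    pop_mean = sum(peak_velocities.values()) / len(peak_velocities)
    return {s: v / pop_mean for s, v in peak_velocities.items()}
```

Computing such scores per modality (saccades, gaze shifts, reaching) is what allows the across-modality correlations reported above to be tested.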
This paper presents an objective evaluation of various eye movement-based biometric features and their ability to accurately and precisely distinguish unique individuals. Eye movements are uniquely counterfeit resistant due to the complex neurological interactions and the extraocular muscle properties involved in their generation. Considered biometric candidates cover a number of basic eye movements and their aggregated scanpath characteristics, including: fixation count, average fixation duration, average saccade amplitudes, average saccade velocities, average saccade peak velocities, the velocity waveform, scanpath length, scanpath area, regions of interest, scanpath inflections, the amplitude-duration relationship, the main sequence relationship, and the pairwise distance between fixations. As well, an information fusion method for combining these metrics into a single identification algorithm is presented. With limited testing, this method was able to identify subjects with an equal error rate of 27%. These results indicate that scanpath-based biometric identification holds promise as a behavioral biometric technique.
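The 27% figure above is an equal error rate (EER): the operating point where the false accept rate equals the false reject rate. A minimal sketch of computing an EER from genuine and impostor match scores is shown below; it assumes higher scores mean better matches, and is a generic illustration rather than the paper's evaluation code.

```python
# Generic sketch of equal error rate (EER) computation from match scores.
# Assumes higher score = better match.
import numpy as np

def equal_error_rate(genuine, impostor):
    """Return the EER: the error rate where FAR and FRR are closest."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```

A lower EER indicates better separation between genuine and impostor score distributions; perfectly separable scores yield an EER of zero.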
The goal of this paper is to predict future horizontal eye movement trajectories within a specified time interval. To achieve this goal, a linear horizontal oculomotor plant mechanical model is developed. The model consists of the eye globe and two extraocular muscles: the lateral and medial recti. The model accounts for such anatomical properties of the eye as muscle location, elasticity, viscosity, eye globe rotational inertia, muscle active-state tension, and the length-tension and force-velocity relationships. The mathematical equations describing the oculomotor plant mechanical model are transformed into Kalman filter form. This transformation provides continuous eye movement prediction with a high degree of accuracy. The model was tested with 21 subjects and three multimedia files. Practical application of this model lies with direct eye-gaze input and interactive display systems, as a method to compensate for detection, transmission, and processing delays.
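The predict-ahead use of a Kalman filter can be sketched with a generic model. Note the paper embeds the full oculomotor plant dynamics in the filter's state transition; the sketch below substitutes a simple constant-velocity position/velocity model, and the noise parameters `q` and `r` are illustrative assumptions.

```python
# Illustrative constant-velocity Kalman predictor for horizontal eye
# position. The paper's filter uses oculomotor plant dynamics; this sketch
# substitutes a generic position/velocity state model.
import numpy as np

def kalman_predict(measurements, dt=0.001, q=1e-3, r=1e-2, horizon=10):
    """Filter noisy position samples, then extrapolate `horizon` steps ahead."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    for z in measurements:
        # predict step
        x, P = F @ x, F @ P @ F.T + Q
        # update step with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    # extrapolate future positions from the final state estimate
    return [float((np.linalg.matrix_power(F, k) @ x)[0, 0])
            for k in range(1, horizon + 1)]
```

The extrapolation at the end is what compensates for detection, transmission, and processing delays: the display acts on where the eye will be, not where it was last measured.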