Eye gaze movements are considered a salient modality for human-computer interaction applications. Recently, cross-ratio (CR) based eye tracking methods have attracted increasing interest because they provide remote gaze estimation using a single uncalibrated camera. However, due to the simplifying assumptions in CR-based methods, their performance is lower than that of model-based approaches [8]. Several efforts have been made to improve accuracy by compensating for these assumptions with subject-specific calibration. This paper presents a CR-based automatic gaze estimation system that works accurately under natural head movements. A subject-specific calibration method based on regularized least-squares regression (LSR) is introduced, achieving higher accuracy than other state-of-the-art calibration methods. Experimental results also show that the proposed calibration method generalizes better when fewer calibration points are used. This enables user-friendly applications with minimal calibration effort without sacrificing much accuracy. In addition, we adaptively fuse the estimates of the point of regard (PoR) from both eyes based on the visibility of eye features. The adaptive fusion scheme reduces accuracy error by around 20% and also increases estimation coverage under natural head movements.
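The regularized least-squares calibration described above can be illustrated as a ridge-regression fit that maps raw CR gaze estimates to the known calibration targets. The following is a minimal sketch with made-up pixel coordinates and an assumed regularization weight, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical calibration data: CR-estimated points of regard (PoR) and the
# true on-screen target positions shown during calibration (both in pixels).
estimated = np.array([[210., 160.], [960., 150.], [1710., 170.],
                      [205., 920.], [1700., 930.]])
targets = np.array([[192., 108.], [960., 108.], [1728., 108.],
                    [192., 972.], [1728., 972.]])

def fit_ridge_calibration(est, tgt, lam=1.0):
    """Regularized least-squares (ridge) affine correction that maps raw
    CR estimates to calibrated gaze points: tgt ~ [est, 1] @ W."""
    X = np.hstack([est, np.ones((len(est), 1))])  # affine design matrix
    # Closed-form ridge solution: W = (X^T X + lam * I)^-1 X^T tgt
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ tgt)

W = fit_ridge_calibration(estimated, targets)
calibrated = np.hstack([estimated, np.ones((len(estimated), 1))]) @ W
```

The regularization term keeps the correction well conditioned when only a few calibration points are available, which is consistent with the abstract's claim of better generalization with fewer points.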
Abstract-Eye movements play a very significant role in human-computer interaction (HCI) as they are natural and fast, and they carry important cues about human cognitive state and visual attention. Over the last two decades, many techniques have been proposed to estimate gaze accurately. Among these, video-based remote eye trackers have attracted much interest since they enable non-intrusive gaze estimation. To achieve high estimation accuracy in remote systems, user calibration is inevitable in order to compensate for the estimation bias caused by person-specific eye parameters. Although several explicit and implicit user calibration methods have been proposed to ease the calibration burden, the procedure is still cumbersome and needs further improvement. In this paper, we present a comprehensive analysis of regression-based user calibration techniques. We propose a novel weighted least-squares regression-based user calibration method together with a real-time cross-ratio based gaze estimation framework. The proposed system achieves high estimation accuracy with minimal user effort, leading to user-friendly HCI applications. Experimental results from both simulations and user experiments show that our framework achieves a significant performance improvement over state-of-the-art user calibration methods when only a few points are available for the calibration.
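A weighted least-squares calibration of the kind mentioned above can be sketched as follows. The coordinates, the weight values, and the idea of down-weighting one unreliable sample are illustrative assumptions, not the paper's specific weighting scheme:

```python
import numpy as np

# Illustrative weighted least-squares (WLS) calibration: each calibration
# sample gets a weight, e.g. higher for samples with stable eye-feature
# detection. All numbers below are made up for the sketch.
est = np.array([[200., 150.], [950., 160.], [1700., 155.],
                [210., 910.], [1690., 925.]])
tgt = np.array([[192., 108.], [960., 108.], [1728., 108.],
                [192., 972.], [1728., 972.]])
w = np.array([1.0, 1.0, 0.2, 1.0, 1.0])  # third sample deemed unreliable

def fit_wls(est, tgt, w):
    """Weighted least-squares affine fit: minimizes sum_i w_i * ||y_i - x_i W||^2.
    Closed form: W = (X^T D X)^-1 X^T D Y with D = diag(w)."""
    X = np.hstack([est, np.ones((len(est), 1))])
    D = np.diag(w)
    return np.linalg.solve(X.T @ D @ X, X.T @ D @ tgt)

W = fit_wls(est, tgt, w)
```

Compared with an unweighted fit, the weighting lets noisy calibration samples contribute less to the learned correction, which matters most when only a few calibration points are collected.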
Abstract. Cancer diagnosis and personalized cancer treatment rely heavily on the visual assessment of immunohistochemically-stained tissue specimens. The precision of this assessment depends critically on the quality of immunostaining, which is governed by a number of parameters used in the staining process. Tuning of the staining-process parameters is mostly based on pathologists' qualitative assessment, which incurs inter- and intra-observer variability. The lack of standardization in staining across pathology labs leads to poor reproducibility and consequently to uncertainty in diagnosis and treatment selection. In this paper, we propose a methodology to address this issue through a quantitative evaluation of staining quality, using visual computing and machine learning techniques on immunohistochemically-stained tissue images. This enables a statistical analysis of the sensitivity of staining quality to the process parameters and thereby provides an optimal operating range for obtaining high-quality immunostains. We evaluate the proposed methodology on HER2-stained breast cancer tissues and demonstrate its use in defining guidelines to optimize and standardize immunostaining.
Abstract-Gaze movements play a crucial role in human-computer interaction (HCI) applications. Recently, gaze tracking systems with a wide variety of applications have attracted much interest from industry as well as the scientific community. State-of-the-art gaze trackers are mostly non-intrusive and report high estimation accuracies. However, they require complex setups, such as camera and geometric calibration, in addition to subject-specific calibration. In this paper, we introduce a multi-camera gaze estimation system that requires less effort from the users in terms of system setup and calibration. The system is based on an adaptive fusion of multiple independent camera systems in which the gaze estimation relies on simple cross-ratio (CR) geometry. Experimental results on real data show that the proposed system achieves a significant accuracy improvement, by around 25%, over traditional CR-based single-camera systems through the novel adaptive multi-camera fusion scheme. The real-time system achieves an accuracy error below 0.9° with very little calibration data (5 points) under natural head movements, which is competitive with more complex systems. Hence, the proposed system enables fast and user-friendly gaze tracking with minimal user effort without sacrificing much accuracy.
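One simple way to realize an adaptive fusion of independent camera estimates, as described above, is a confidence-weighted average where each camera's weight reflects how reliably it detected the eye features. The function name, the per-camera confidences, and the coordinates below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def fuse_estimates(points, confidences):
    """Confidence-weighted average of per-camera gaze estimates (pixels).
    Cameras with zero confidence (e.g. an occluded eye) are ignored;
    returns None if no camera saw enough eye features."""
    points = np.asarray(points, dtype=float)
    conf = np.asarray(confidences, dtype=float)
    if conf.sum() == 0:
        return None
    weights = conf / conf.sum()   # normalize so the weights sum to 1
    return weights @ points       # weighted average over cameras

# Three cameras: the third fails to detect the eye features (confidence 0),
# so the fused estimate falls back to the two remaining cameras.
fused = fuse_estimates([[500., 300.], [520., 310.], [480., 290.]],
                       [1.0, 0.5, 0.0])
```

Because the weights adapt to feature visibility per frame, the fused output degrades gracefully under head movements that occlude one camera's view, which is the intuition behind the coverage gains reported in the abstract.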