We describe an open-source software framework that simulates the measurements made using one or several cameras in a videooculographic eye tracker. The framework can be used to compare objectively the performance of different eye tracking setups (number and placement of cameras and light sources) and gaze estimation algorithms. We demonstrate the utility of the framework by using it to compare two remote eye tracking methods, one using a single camera, the other using two cameras.
Abstract—This paper presents a simple feature-based nose detector operating on combined range and amplitude data obtained by a 3D time-of-flight camera. Robust localization of image attributes, such as the nose, can be used for accurate object tracking. We use geometric features that are related to the intrinsic dimensionality of surfaces. To find a nose in the image, the features are computed per pixel; pixels whose feature values lie inside a certain bounding box in feature space are classified as nose pixels, and all other pixels are classified as non-nose pixels. The extent of the bounding box is learned on a labeled training set. Despite its simplicity, this procedure generalizes well; that is, a bounding box determined for one group of subjects accurately detects the noses of other subjects. The performance of the detector is demonstrated by robustly identifying the nose of a person across a wide range of head orientations. An important result is that combining range and amplitude data dramatically improves accuracy compared to using either type of data alone. This is reflected in the equal error rates (EER) obtained on a database of head poses: using only the range data, we detect noses with an EER of 0.66; results on the amplitude data are slightly better, with an EER of 0.42; the combination of both types of data yields a substantially improved EER of 0.03.
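The bounding-box classification scheme described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature values, array shapes, and example data below are hypothetical, and the actual geometric features (related to the intrinsic dimensionality of surfaces) are not reproduced here.

```python
import numpy as np

def learn_bounding_box(train_features, train_labels):
    """Learn the per-dimension extent of the bounding box in feature
    space from the feature vectors of pixels labeled as nose (label 1)."""
    nose = train_features[train_labels == 1]
    return nose.min(axis=0), nose.max(axis=0)

def classify(features, box_min, box_max):
    """A pixel is classified as a nose pixel iff its feature vector
    lies inside the learned bounding box; all others are non-nose."""
    return np.all((features >= box_min) & (features <= box_max), axis=-1)

# Hypothetical 2-D features (e.g. one derived from range data,
# one from amplitude data), with labels 1 = nose, 0 = non-nose.
train_x = np.array([[0.20, 0.50], [0.30, 0.60], [0.25, 0.55],
                    [0.90, 0.10], [0.80, 0.20]])
train_y = np.array([1, 1, 1, 0, 0])

lo, hi = learn_bounding_box(train_x, train_y)
test_x = np.array([[0.27, 0.57], [0.85, 0.15]])
print(classify(test_x, lo, hi))  # first pixel falls inside the box, second outside
```

Fitting the box to the extrema of the positive training pixels is the simplest choice; the generalization result reported in the abstract (a box learned on one group of subjects detecting noses of others) suggests such a coarse decision region suffices for this feature set.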
Larry Stark has emphasised that what we visually perceive is very much determined by the scanpath, i.e. the pattern of eye movements [1]. Inspired by his view, we have studied the implications of the scanpath for visual communication and arrived at the idea of not only sensing and analysing eye movements, but also guiding them using a special kind of gaze-contingent information display. Our goal is to integrate gaze into visual communication systems by measuring and guiding eye movements. For guidance, we first predict a set of about 10 salient locations. We then change the probability that one of these candidates will be attended: for one candidate the probability is increased, for the others it is decreased. To increase saliency, for example, we add red dots that are displayed so briefly that they are hardly perceived consciously. To decrease the probability, for example, we locally reduce the temporal frequency content. Again, if performed in a gaze-contingent fashion with low latencies, these manipulations remain unnoticed. Overall, the goal is to find the real-time video transformation that minimises the difference between the actual and the desired scanpath without being obtrusive. Applications are in the area of vision-based communication (better control of what information is conveyed) and in augmented vision and learning (guiding a person's gaze by the gaze of an expert or a computer-vision system). We believe that our research is very much in the spirit of Larry Stark's views on visual perception and on the close link between vision research and engineering.
The field of quantitative analysis and subsequent mapping of the cerebral cortex has developed rapidly. Powerful new tools have been applied to investigate large regions of the complexly folded gyrencephalic cortex in order to detect structural transition regions that might partition cortical fields serving distinct neuronal functions. We have developed a new mapping approach based on axoarchitectonics, a mode of cortical visualization that had previously been used only indirectly, via myeloarchitectonics. Myeloarchitectonic visualization has the disadvantage of producing strong agglomerative effects among closely neighboring nerve fibers. Therefore, individual, neurofunctionally relevant parameters such as axonal branchings, axon areas, and axon numbers have not been determinable with satisfactory precision. As a result, different staining techniques had to be explored in order to achieve a histologic staining suitable for axon visualization. The best results were obtained after modifying the Naoumenko-Feigin staining for axons. From these contrast-rich histologic sections, videomicroscopic digital image tiles were generated and analyzed using a new fiber analysis framework. The analysis of the histologic images provided topologically ordered axon parameters that were transferred into parameter maps. The axon parameter maps were further analyzed via a recently developed traverse-generating algorithm that calculated test lines oriented perpendicular to the cortical surface and the white matter border. The gray-value-coded parameters of the parameter maps were then transferred into profile arrays, which were statistically analyzed by a reliable excess mass approach we recently developed. We found that specific axonal parameters are preferentially distributed throughout granular and agranular types of cortex. Furthermore, our new procedure detected transition regions originally defined by changes in cytoarchitectonic layering. Statistically significant inhomogeneities in the distribution of certain axon quantities were shown to indicate a subparcellation of areas 4 and 6. The quantification techniques established here for analyzing spatial axon distributions within larger regions of the cerebral cortex are suitable for detecting inhomogeneities in laminar axon patterns. Hence, these techniques can be recommended for systematic, observer-supported cortical area mapping and parcellation studies.