Identifying the visual cues that determine relative depth across an image contour (i.e., figure-ground organization) is a central problem of vision science. In this paper, we compare flat cues to figure-ground organization with the recently discovered cue of extremal edges (EEs), which arise when opaque convex surfaces smoothly curve to partly occlude themselves. The present results show that EEs are very powerful pictorial cues to relative depth across an edge, almost entirely dominating the well-known figure-ground cues of relative size, convexity, shape familiarity, and surroundedness. These results demonstrate that natural shading and texture gradients in an image provide important information about figure-ground organization that has largely been overlooked in the past 75 years of research on this topic.
Extremal edges (EEs) are projections of viewpoint-specific horizons of self-occlusion on smooth convex surfaces. An ecological analysis of viewpoint constraints suggests that an EE surface is likely to be closer to the observer than the non-EE surface on the other side of the edge. In two experiments, one using shading gradients and the other using texture gradients, we demonstrated that EEs operate as strong cues to relative depth perception and figure-ground organization. Image regions with an EE along the shared border were overwhelmingly perceived as closer than either flat or equally convex surfaces without an EE along that border. A further demonstration suggests that EEs are more powerful than classical figure-ground cues, including even the joint effects of small size, convexity, and surroundedness.
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions: they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as of perceived difficulty and confidence. Several useful predictors emerged, including variables related to image quality metrics, such as intensity and contrast information, and measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness-of-fit measures on the original data set and in a cross-validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessments of difficulty in fingerprint comparisons.
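The modeling approach described in this abstract — regressing a difficulty score on image-derived predictors and checking the fit on held-out data — can be sketched as follows. The predictors, the synthetic data, and the split sizes here are illustrative placeholders, not the study's actual metrics or fingerprint database.

```python
import numpy as np

# Sketch: predict a "difficulty" score from image metrics via ordinary
# least squares, then evaluate goodness of fit (R^2) on held-out pairs.
# The three features stand in for hypothetical metrics such as contrast,
# mean intensity, and fingerprint area; the data are synthetic.

rng = np.random.default_rng(0)
n = 200

X = rng.normal(size=(n, 3))              # placeholder image metrics
true_w = np.array([0.8, -0.5, 1.2])      # arbitrary "ground truth" weights
y = X @ true_w + rng.normal(scale=0.3, size=n)  # noisy difficulty scores

# Hold out a quarter of the pairs for cross-validation.
X_fit, X_hold = X[:150], X[150:]
y_fit, y_hold = y[:150], y[150:]

# Fit a linear model with an intercept term.
A_fit = np.column_stack([np.ones(len(X_fit)), X_fit])
coef, *_ = np.linalg.lstsq(A_fit, y_fit, rcond=None)

# R^2 on the held-out set: 1 - residual SS / total SS.
A_hold = np.column_stack([np.ones(len(X_hold)), X_hold])
pred = A_hold @ coef
ss_res = np.sum((y_hold - pred) ** 2)
ss_tot = np.sum((y_hold - y_hold.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

Evaluating R^2 on pairs the model never saw, rather than on the fitting set, is what distinguishes the cross-validation check from the in-sample goodness-of-fit measure the abstract also mentions.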
We investigate how pressure-sensitive smart textiles, in the form of a headband, can detect changes in facial expressions that are indicative of emotions and cognitive activities. Specifically, we present the Expressure system, which performs surface pressure mechanomyography on the forehead using an array of textile pressure sensors that does not depend on specific placement or attachment to the skin. Our approach is evaluated in systematic psychological experiments. First, through a mimicking-expression experiment with 20 participants, we demonstrate the system’s ability to detect well-defined facial expressions. We achieved an accuracy of 0.824 in classifying three eyebrow movements (chance level 0.333) and 0.381 for seven full-face expressions (chance level 0.143). A second experiment was conducted with 20 participants to induce cognitive load with N-back tasks. Statistical analysis showed significant correlations between the Expressure features at a fine time granularity and the cognitive activity. The results also showed significant correlations between the Expressure features and the N-back score. For the 10 most facially expressive participants, our approach can predict whether the N-back score is above or below the average with 0.767 accuracy.
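The chance levels quoted above follow from the number of response classes: with k balanced classes, uniform guessing yields an expected accuracy of 1/k. A minimal sketch, including a normal-approximation z-score for accuracy above chance (the trial count used below is a hypothetical example, not reported in the abstract):

```python
import math

def chance_level(n_classes: int) -> float:
    """Expected accuracy of uniform random guessing over n_classes."""
    return 1.0 / n_classes

def z_vs_chance(acc: float, n_trials: int, n_classes: int) -> float:
    """Normal-approximation z-score for observed accuracy above chance."""
    p0 = chance_level(n_classes)
    se = math.sqrt(p0 * (1.0 - p0) / n_trials)
    return (acc - p0) / se

# The two tasks from the abstract.
three_class = round(chance_level(3), 3)  # 0.333
seven_class = round(chance_level(7), 3)  # 0.143

# Hypothetical 100 trials per participant (an assumption for illustration).
z3 = z_vs_chance(0.824, 100, 3)
z7 = z_vs_chance(0.381, 100, 7)
```

Both reported accuracies sit well above their respective baselines, which is why the seven-class figure of 0.381, though low in absolute terms, is still far better than guessing.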
A recent paper examined eye dominance with the eyes in forward and eccentric gaze [Vision Res. 41 (2001) 1743]. When observers were looking to the left, the left eye tended to dominate and when they were looking to the right, the right eye tended to dominate. The authors attributed the switch in eye dominance to extra-retinal signals associated with horizontal eye position. However, when one looks at a near object on the left, the image in the left eye is larger than the one in the right eye, and when one looks to the right, the opposite occurs. Thus, relative image size could also trigger switches in eye dominance. We used a cue-conflict paradigm to determine whether eye position or relative image size is the determinant of eye-dominance switches with changes in gaze angle. When eye position and relative image size were varied independently, there was no consistent effect of eye position. Relative image size appears to be the sole determinant of the switch.