In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated "candidate lists" selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to that of trained passport officers, who use the system in their daily work, and found equivalent performance across these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed both groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.
Research on the visual perception of materials has mostly focused on the surface qualities of rigid objects. The perception of substance-like materials is less explored. Here, we investigated the contribution of, and interaction between, surface optics and mechanical properties in the perception of nonrigid, breaking materials. We created novel animations of materials ranging from soft to hard bodies that broke apart differently when dropped. In Experiment 1, animations were rendered as point-light movies varying in dot density, as well as "full-cue" optical versions ranging from translucent glossy to opaque matte under a natural illumination field. Observers used a scale to rate each substance on different attributes. In Experiment 2, we investigated how much shape contributed to ratings of the full-cue stimuli in Experiment 1 by comparing ratings when observers were shown movies versus a single frame of the animation. The results showed that optical and mechanical properties had an interactive effect on ratings of several material attributes. We also found that motion and static cues each provided substantial information about material qualities; however, when combined, they influenced observers' ratings interactively. For example, in some conditions, motion dominated over optical information; in other conditions, it enhanced the effect of optics. Our results suggest that rating multiple attributes is an effective way to measure underlying perceptual differences between nonrigid breaking materials, and this study is the first to our knowledge to show interactions between optical and mechanical properties in a task involving judgments of perceptual qualities.
Many objects that we encounter have typical material qualities: spoons are hard, pillows are soft, and Jell-O dessert is wobbly. Over a lifetime of experiences, strong associations between an object and its typical material properties may be formed, and these associations include not only how glossy, rough, or pink an object is, but also how it behaves under force: we expect knocked-over vases to shatter, popped bike tires to deflate, and gooey grilled cheese to hang between two slices of bread when pulled apart. Here we ask how such rich visual priors affect the visual perception of material qualities and present a particularly striking example of expectation violation. In a cue-conflict design, we pair computer-rendered familiar objects with surprising material behaviors (a linen curtain shattering, a porcelain teacup wrinkling, etc.) and find that material qualities are not solely estimated from the object's kinematics (i.e., its physical [atypical] motion while shattering, wrinkling, wobbling, etc.); rather, material appearance is sometimes "pulled" toward the "native" motion, shape, and optical properties that are associated with this object. Our results, in addition to patterns we find in response time data, suggest that visual priors about materials can set up high-level expectations about complex future states of an object and show how these priors modulate material appearance.
There is a growing body of work investigating the visual perception of material properties like gloss, yet practically nothing is known about how the brain recognises different material classes like plastic, pearl, satin, and steel, nor the precise relationship between material properties like gloss and perceived material class. We report a series of experiments that show that parametrically changing reflectance parameters leads to qualitative changes in material appearance beyond those expected by the reflectance function used. We measure visual (image) features that predict these changes in appearance, and causally manipulate these features to confirm their role in perceptual categorisation. Furthermore, our results suggest that the same visual features underlie both material recognition and surface gloss perception. However, the predictiveness of each feature to perceived gloss changes with material category, suggesting that the pockets of feature space occupied by different material classes affect the processing of those very features when estimating surface glossiness. Our results do not support a traditional feedforward view that assumes that material perception proceeds from low-level image measurements, to mid-level estimates of surface properties, to high-level material classes, nor the idea that material properties like gloss and material class are simultaneously "read out" from visual gloss features. Instead, we suggest that the perception and neural processing of material properties like surface gloss should be considered in the context of material recognition.
A likely reason for this is that studying properties like colour and gloss seems more tractable than discovering the necessary and sufficient conditions for recognising the many material classes in our environment. For example, previous research has discovered a limited set of conditions that trigger the perception of a glossy versus matte surface, involving the intensity, shape, position, and orientation of specular highlights (bright reflections).
A series of experiments were conducted to assess how reflectance properties and the complexity of surface "mesostructure" (small-scale 3-D relief) influence perceived lightness. Experiment 1 evaluated the role of surface relief and gloss in perceived lightness. For surfaces with visible mesostructure, lightness constancy was better for targets embedded in glossy than in matte surfaces. The results for surfaces that lacked surface relief were qualitatively different from those for the 3-D surrounds, exhibiting abrupt steps in perceived lightness at points at which the targets transition from being increments to decrements. Experiments 2 and 4 compared the matte and glossy 3-D surrounds to two control displays, which matched either pixel histograms or a phase-scrambled power spectrum, respectively. Although some improvement in lightness constancy was observed for the 3-D gloss display over the histogram-matched display, this benefit was not observed for phase-scrambled variants of these images with equated power spectra. These results suggest that the improved lightness constancy observed with 3-D surfaces can be well explained by the distribution of contrast across space and scale, independently of explicit information about surface shading or specularity, whereas the putatively "simpler" flat displays may evoke more complex midlevel representations similar to those evoked in conditions of transparency.
Visually categorizing and comparing materials is crucial for our everyday behaviour. Given the dramatic variability in their visual appearance and functional significance, what organizational principles underlie the internal representation of materials? To address this question, here we use a large-scale data-driven approach to uncover the core latent dimensions in our mental representation of materials. In a first step, we assembled a new image dataset (STUFF dataset) consisting of 600 photographs of 200 systematically sampled material classes. Next, we used these images to crowdsource 1.87 million triplet similarity judgments. Based on the responses, we then modelled the assumed cognitive process underlying these choices by quantifying each image as a sparse, non-negative vector in a multidimensional embedding space. The resulting embedding predicted material similarity judgments in an independent test set close to the human noise ceiling and accurately reconstructed the similarity matrix of all 600 images in the STUFF dataset. We found that representations of individual material images were captured by a combination of 36 material dimensions that were highly reproducible and interpretable, comprising perceptual (e.g., "grainy", "blue") as well as conceptual (e.g., "mineral", "viscous") dimensions. These results have broad implications for understanding material perception, its natural dimensions, and our ability to organize materials into classes.
Lightness judgments of targets embedded in a homogeneous surround exhibit abrupt steps in perceived lightness at points at which the targets transition from being increments to decrements. This "crispening effect" and the general difficulty of matching low-contrast targets embedded in homogeneous surrounds suggest that a second perceptual dimension in addition to lightness may contribute to the appearance of test patches in these displays. The present study explicitly tested whether two dimensions (lightness and transmittance) could lead to more satisfactory matches than lightness alone in an asymmetric matching task. We also examined whether transmittance matches were more strongly associated with task instructions that had observers match perceived transparency or the perceived edge contrast of the target relative to the surround. We found that matching target lightness in a homogeneous display to that in a textured or rocky display required varying both lightness and transmittance of the test patch on the textured display to obtain the most satisfactory matches. However, observers primarily varied transmittance when instructed to match the perceived contrast of targets against homogeneous surrounds, but not when instructed to match the amount of transparency perceived in the displays. The results suggest that perceived target-surround edge contrast differs between homogeneous and textured displays. Varying the midlevel property of transparency in textured displays provides a natural means for equating both target lightness and the unique appearance of the edge contrast in homogeneous displays.