Summary
Although certain characteristics of human faces are broadly considered more attractive (e.g. symmetry, averageness), people also routinely disagree with each other on the relative attractiveness of faces. That is, to some significant degree, beauty is in the “eye of the beholder”. Here, we investigate the origins of these individual differences in face preferences using a twin design, allowing us to estimate the relative contributions of genetic and environmental variation to individual face attractiveness judgments, or face preferences. We first show that individual face preferences (IPs) can be reliably measured and are readily dissociable from other types of attractiveness judgments (e.g. judgments of scenes, objects). Next, we show that individual face preferences result primarily from environments that are unique to each individual. This is in striking contrast to individual differences in face identity recognition, which result primarily from variations in genes [1]. We thus complete an etiological double dissociation between two core domains of social perception (judgments of identity versus attractiveness) within the same visual stimulus (the face). At the same time, we provide an example, rare in behavioral genetics, of a reliably and objectively measured behavioral characteristic whose variations are shaped mostly by the environment. The large impact of experience on individual face preferences provides a novel window into the evolution and architecture of the social brain, while lending new empirical support to the long-standing claim that environments shape individual notions of what is attractive.
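The abstract does not spell out how the genetic and environmental contributions are partitioned; a classical first-pass method in twin designs is Falconer's decomposition, which compares trait correlations between monozygotic (MZ) and dizygotic (DZ) twin pairs. The sketch below illustrates only that textbook decomposition, not the authors' actual analysis (modern twin studies typically fit full ACE structural equation models), and the example correlations are hypothetical.

```python
# Minimal sketch of Falconer's decomposition for a twin design.
# NOTE: illustrative only -- the correlations below are hypothetical
# placeholders, not estimates from the paper.

def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Estimate ACE variance components from MZ/DZ twin correlations.

    A (additive genetic)          ~ 2 * (r_MZ - r_DZ)
    C (shared environment)        ~ 2 * r_DZ - r_MZ
    E (unique environment/error)  ~ 1 - r_MZ
    """
    components = {
        "A": 2 * (r_mz - r_dz),  # heritability
        "C": 2 * r_dz - r_mz,    # shared (family) environment
        "E": 1 - r_mz,           # unique environment + measurement error
    }
    return {k: round(v, 2) for k, v in components.items()}

# A hypothetical pattern consistent with "unique environments dominate":
# MZ and DZ twins correlate similarly and only modestly.
print(falconer_ace(r_mz=0.25, r_dz=0.20))  # {'A': 0.1, 'C': 0.15, 'E': 0.75}
```

Under this decomposition, the paper's central claim corresponds to a large E component: MZ and DZ correlations that are similar to each other and well below 1.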
Scientific fields that study faces have each developed their own concepts and procedures for understanding how a target model system (be it a person or an algorithm) perceives a face under varying conditions. In computer vision, this has largely taken the form of dataset evaluation for recognition tasks, where summary statistics are used to measure progress. While aggregate performance has continued to improve, understanding individual causes of failure has been difficult: it is not always clear why a particular face fails to be recognized, or why an impostor is accepted by an algorithm. Other fields that study vision have addressed this problem through visual psychophysics: the controlled manipulation of stimuli and careful study of the responses they evoke in a model system. In this paper, we suggest that visual psychophysics is a viable methodology for making face recognition algorithms more explainable. We develop a comprehensive set of procedures for assessing face recognition algorithm behavior and deploy it over state-of-the-art convolutional neural networks as well as more basic, yet still widely used, shallow and handcrafted feature-based approaches.
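To make the procedure concrete, here is a minimal sketch of a psychophysics-style perturbation sweep over a face matcher: a single stimulus parameter (Gaussian blur, chosen here as an example) is varied in controlled increments while the model's response is recorded at each level. The matcher interface, the `load_pairs` loader, and the choice of blur are assumptions for illustration, not the paper's actual test harness.

```python
# Sketch of a controlled stimulus-manipulation sweep for a face matcher.
# FaceMatcher and load_pairs are hypothetical stand-ins for whatever
# recognition model and dataset loader are under study.
import numpy as np
from PIL import Image, ImageFilter

def blur(img: Image.Image, sigma: float) -> Image.Image:
    """The controlled stimulus manipulation: Gaussian blur of strength sigma."""
    return img.filter(ImageFilter.GaussianBlur(radius=sigma))

def sweep(matcher, pairs, sigmas):
    """Record the model's response (match score) at each perturbation level.

    pairs  : list of (probe_image, gallery_image, same_identity) tuples
    sigmas : increasing blur levels, from unperturbed to heavily degraded
    """
    responses = {}
    for sigma in sigmas:
        scores = [matcher.match_score(blur(p, sigma), g) for p, g, _ in pairs]
        responses[sigma] = float(np.mean(scores))
    return responses  # one point per stimulus level -> an item-response curve

# Example (hypothetical): sweep(FaceMatcher(), load_pairs(), np.linspace(0, 8, 9))
```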
By providing substantial amounts of data and standardized evaluation protocols, datasets in computer vision have helped fuel advances across all areas of visual recognition. But even in light of breakthrough results on recent benchmarks, it is still fair to ask whether our recognition algorithms are doing as well as we think they are. The vision sciences at large make use of a very different evaluation regime known as visual psychophysics to study visual perception. Psychophysics is the quantitative examination of the relationships between controlled stimuli and the behavioral responses they elicit in experimental test subjects. Instead of using summary statistics to gauge performance, psychophysics directs us to construct item-response curves from individual stimulus responses in order to find perceptual thresholds, thus identifying the exact point at which a subject can no longer reliably recognize a stimulus class. In this article, we introduce a comprehensive evaluation framework for visual recognition models that is underpinned by this methodology. Over millions of procedurally rendered 3D scenes and 2D images, we compare the performance of well-known convolutional neural networks. Our results call into question recent claims of human-like performance and provide a path forward for correcting newly surfaced algorithmic deficiencies.
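As an illustration of this methodology, the following sketch aggregates per-stimulus correctness into an item-response curve and then locates a perceptual threshold: the perturbation level at which accuracy crosses a chosen criterion. The 50% criterion and the linear interpolation are illustrative assumptions, not the framework's actual definitions.

```python
# Sketch: build an item-response curve and read off a perceptual threshold.
import numpy as np

def item_response_curve(levels, correct):
    """Aggregate per-stimulus responses into accuracy at each level.

    levels  : perturbation level of each stimulus presentation
    correct : 1 if the model recognized that stimulus, else 0
    """
    levels, correct = np.asarray(levels), np.asarray(correct)
    xs = np.unique(levels)
    ys = np.array([correct[levels == x].mean() for x in xs])
    return xs, ys

def perceptual_threshold(xs, ys, criterion=0.5):
    """Interpolate the level where accuracy first drops below the criterion."""
    below = np.where(ys < criterion)[0]
    if below.size == 0:
        return None  # never drops below the criterion in the tested range
    i = below[0]
    if i == 0:
        return float(xs[0])
    # linear interpolation between the last point above and the first below
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return float(x0 + (criterion - y0) * (x1 - x0) / (y1 - y0))

# Example: xs, ys = item_response_curve(levels, correct)
#          t = perceptual_threshold(xs, ys, criterion=0.5)
```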