Many researchers have proposed that, for the purpose of recognition, human vision parses shapes into component parts. Precisely how is not yet known. The minima rule for silhouettes (Hoffman & Richards, 1984) defines boundary points at which to parse but does not tell how to use these points to cut silhouettes and, therefore, does not tell what the parts are. In this paper, we propose the short-cut rule, which states that, other things being equal, human vision prefers to use the shortest possible cuts to parse silhouettes. We motivate this rule, and the well-known Petter's rule for modal completion, by the principle of transversality. We present five psychophysical experiments that test the short-cut rule, show that it successfully predicts part cuts that connect boundary points given by the minima rule, and show that it can also create new boundary points.
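The short-cut rule described above can be illustrated with a small sketch (ours, not the paper's implementation). On a polygonal silhouette, the negative minima of curvature from the minima rule reduce to reflex (concave) vertices; the short-cut rule then prefers the shortest segment joining two of them. The polygon, function names, and the tie-breaking behavior of `min` are all our assumptions for illustration:

```python
import math

def cross(o, a, b):
    """z-component of (a - o) x (b - o); sign gives turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def concave_vertices(poly):
    """Indices of reflex (concave) vertices of a counterclockwise polygon.
    These play the role of the minima rule's boundary points."""
    n = len(poly)
    return [i for i in range(n)
            if cross(poly[i - 1], poly[i], poly[(i + 1) % n]) < 0]

def shortest_cut(poly):
    """Pair of concave-vertex indices joined by the shortest cut,
    i.e. the cut the short-cut rule prefers, other things being equal."""
    cand = concave_vertices(poly)
    pairs = [(i, j) for i in cand for j in cand if i < j]
    return min(pairs, key=lambda p: math.dist(poly[p[0]], poly[p[1]]))

# A "dumbbell" silhouette: two wide lobes joined by a narrow waist,
# listed counterclockwise.
dumbbell = [(0, 0), (4, 0), (4, 1), (2.5, 1), (2.5, 3), (4, 3), (4, 4),
            (0, 4), (0, 3), (1.5, 3), (1.5, 1), (0, 1)]

i, j = shortest_cut(dumbbell)
# Joins (2.5, 1) and (1.5, 1): a cut straight across the waist,
# splitting the dumbbell into its two lobes.
```

Here the four waist vertices are the concave candidates, and the shortest admissible cut (length 1, versus 2 for a cut along the waist) severs the shape at its narrowest point, as the rule predicts.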
Human vision organizes object shapes in terms of parts and their spatial relationships. Converging experimental evidence suggests that parts are computed rapidly and early in visual processing. We review theories of how human vision parses shapes. In particular, we discuss the minima rule for finding part boundaries on shapes, geometric factors for creating part cuts, and a theory of part salience. We review empirical evidence that human vision parses shapes into parts, and show that parts-based representations explain various aspects of our visual cognition, including figure-ground assignment, judgments of shape similarity, memory for shapes, visual search for shapes, the perception of transparency, and the allocation of visual attention to objects. From Fragments to Objects: Segmentation and Grouping in Vision
Current models of visual perception typically assume that human vision estimates true properties of physical objects, properties that exist even if unperceived. However, recent studies of perceptual evolution, using evolutionary games and genetic algorithms, reveal that natural selection often drives true perceptions to extinction when they compete with perceptions tuned to fitness rather than truth: Perception guides adaptive behavior; it does not estimate a preexisting physical truth. Moreover, shifting from evolutionary biology to quantum physics, there is reason to disbelieve in preexisting physical truths: Certain interpretations of quantum theory deny that dynamical properties of physical objects have definite values when unobserved. In some of these interpretations the observer is fundamental, and wave functions are compendia of subjective probabilities, not preexisting elements of physical reality. These two considerations, from evolutionary biology and quantum physics, suggest that current models of object perception require fundamental reformulation. Here we begin such a reformulation, starting with a formal model of consciousness that we call a “conscious agent.” We develop the dynamics of interacting conscious agents, and study how the perception of objects and space-time can emerge from such dynamics. We show that one particular object, the quantum free particle, has a wave function that is identical in form to the harmonic functions that characterize the asymptotic dynamics of conscious agents; particles are vibrations not of strings but of interacting conscious agents. This allows us to reinterpret physical properties such as position, momentum, and energy as properties of interacting conscious agents, rather than as preexisting physical truths. We sketch how this approach might extend to the perception of relativistic quantum objects, and to classical objects of macroscopic scale.
Studies of biological motion have identified specialized neural machinery for the perception of human actions. Our experiments examine behavioral and neural responses to novel, articulating and non-human 'biological motion'. We find that non-human actions are seen as animate, but do not convey body structure when viewed as point-lights. Non-human animations fail to engage the human STSp, and neural responses in pITG, ITS and FFA/FBA are reduced only for the point-light versions. Our results suggest that STSp is specialized for human motion and ventral temporal regions support general, dynamic shape perception. We also identify a region in ventral temporal cortex 'selective' for non-human animations, which we suggest processes novel, dynamic objects.
Perception is a product of evolution. Our perceptual systems, like our limbs and livers, have been shaped by natural selection. The effects of selection on perception can be studied using evolutionary games and genetic algorithms. To this end, we define and classify perceptual strategies and allow them to compete in evolutionary games in a variety of worlds with a variety of fitness functions. We find that veridical perceptions--strategies tuned to the true structure of the world--are routinely dominated by nonveridical strategies tuned to fitness. Veridical perceptions escape extinction only if fitness varies monotonically with truth. Thus, a perceptual strategy favored by selection is best thought of not as a window on truth but as akin to a windows interface of a PC. Just as the color and shape of an icon for a text file do not entail that the text file itself has a color or shape, so also our perceptions of space-time and objects do not entail (by the Invention of Space-Time Theorem) that objective reality has the structure of space-time and objects. An interface serves to guide useful actions, not to resemble truth. Indeed, an interface hides the truth; for someone editing a paper or photo, seeing transistors and firmware is an irrelevant hindrance. For the perceptions of H. sapiens, space-time is the desktop and physical objects are the icons. Our perceptions of space-time and objects have been shaped by natural selection to hide the truth and guide adaptive behaviors. Perception is an adaptive interface.
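The core evolutionary-game result above — that fitness-tuned perception dominates veridical perception when fitness is not monotonic in truth — can be sketched with a minimal toy simulation (ours, not the paper's actual games; the fitness function, payoff scheme, and names are illustrative assumptions):

```python
import random

def fitness(x):
    """Non-monotonic fitness: too little or too much of the resource
    is bad, with a peak at quantity 50. 'More truth' is not 'more fitness'."""
    return max(0.0, 1.0 - abs(x - 50) / 50)

def run(trials=10_000, seed=1):
    """Pit a veridical strategy against a fitness-tuned (interface) one.
    Each trial offers two territories with true resource quantities a, b;
    each strategy picks one and collects its fitness as payoff."""
    rng = random.Random(seed)
    truth_payoff = interface_payoff = 0.0
    for _ in range(trials):
        a, b = rng.uniform(0, 100), rng.uniform(0, 100)
        # Veridical strategy perceives true quantities and takes the larger.
        truth_payoff += fitness(max(a, b))
        # Interface strategy perceives only fitness and takes the fitter.
        interface_payoff += max(fitness(a), fitness(b))
    return truth_payoff, interface_payoff

truth, interface = run()
# The fitness-tuned strategy outscores the veridical one, because
# "largest quantity" and "highest fitness" come apart whenever the
# larger quantity overshoots the peak.
```

The simulation makes the abstract's point concrete: whenever the truthfully larger option is past the fitness peak, the veridical strategy pays for its accuracy, so selection favors the interface.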