Perception of objects in ordinary scenes requires interpolation processes connecting visible areas across spatial gaps. Most research has focused on 2-D displays, and models have been based on 2-D, orientation-sensitive units. The authors present a view of interpolation processes as intrinsically 3-D and producing representations of contours and surfaces spanning all 3 spatial dimensions. The authors propose a theory of 3-D relatability that indicates for a given edge which orientations and positions of other edges in 3 dimensions may be connected to it, and they summarize the empirical evidence for 3-D relatability. The theory unifies and illuminates a number of fundamental issues in object formation, including the identity hypothesis in visual completion, the relations of contour and surface processes, and the separation of local and global processing. The authors suggest that 3-D interpolation and 3-D relatability have major implications for computational and neural models of object perception.
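As background, the 2-D precursor of the relatability criterion can be sketched as a simple geometric test: two edges are taken to be connectable when their linear extensions meet ahead of both endpoints and joining them bends through no more than about 90 degrees. The sketch below is an illustrative simplification, not the authors' formal 3-D criterion; the function name and tolerances are invented for this example.

```python
import numpy as np

def relatable_2d(p1, theta1, p2, theta2):
    """Rough 2-D relatability test: edge 1 ends at point p1 heading in
    direction theta1 (radians); edge 2 begins at p2 heading theta2.
    Relatable when the edges' linear extensions meet ahead of both
    endpoints and the connection turns through at most 90 degrees."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    if d1 @ d2 < 0:                       # turn of more than 90 degrees
        return False
    rhs = np.asarray(p2, float) - np.asarray(p1, float)
    M = np.column_stack([d1, d2])
    if abs(np.linalg.det(M)) < 1e-9:      # parallel: relatable only if collinear
        cross = d1[0] * rhs[1] - d1[1] * rhs[0]
        return bool(abs(cross) < 1e-9 and d1 @ rhs >= 0)
    a, b = np.linalg.solve(M, rhs)        # extension lengths to the meeting point
    return bool(a >= 0 and b >= 0)

# A gentle 45-degree turn is relatable; an edge lying behind the first is not.
print(relatable_2d((0, 0), 0.0, (2, 1), np.pi / 4))    # True
print(relatable_2d((0, 0), 0.0, (-2, 1), np.pi / 4))   # False
```

The 3-D theory extends this kind of positional and orientational constraint to edges at arbitrary positions and slants in depth.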
We consider perceptual learning -- experience-induced changes in the way perceivers extract information. Often neglected in scientific accounts of learning and in instruction, perceptual learning is a fundamental contributor to human expertise and is likely a crucial contributor in domains where humans show remarkable levels of attainment, such as chess, music, and mathematics. In Section II, we give a brief history and discuss the relation of perceptual learning to other forms of learning. We consider in Section III several specific phenomena, illustrating the scope and characteristics of perceptual learning, including both discovery and fluency effects. We describe abstract perceptual learning, in which structural relationships are discovered and recognized in novel instances that do not share constituent elements or basic features. In Section IV, we consider primary concepts that have been used to explain and model perceptual learning, including receptive field change, selection, and relational recoding. In Section V, we consider the scope of perceptual learning, contrasting recent research, focused on simple sensory discriminations, with earlier work that emphasized extraction of invariance from varied instances in more complex tasks. Contrary to some recent views, we argue that perceptual learning should not be confined to changes in early sensory analyzers. Phenomena at various levels, we suggest, can be unified by models that emphasize discovery and selection of relevant information. In a final section, we consider the potential role of perceptual learning in educational settings. Most instruction emphasizes facts and procedures that can be verbalized, whereas expertise depends heavily on implicit pattern recognition and selective extraction skills acquired through perceptual learning. 
We consider reasons why perceptual learning has not been systematically addressed in traditional instruction, and we describe recent successful efforts to create a technology of perceptual learning in areas such as aviation, mathematics, and medicine. Research in perceptual learning promises to advance scientific accounts of learning, and perceptual learning technology may offer similar promise in improving education.
Here we introduce a database of calibrated natural images, publicly available through an easy-to-use web interface. Using a Nikon D70 digital SLR camera, we acquired six-megapixel images of the Okavango Delta of Botswana, a tropical savanna habitat similar to that in which the human eye is thought to have evolved. Some sequences of images were captured unsystematically while following a baboon troop; others were designed to vary a single parameter such as aperture, object distance, time of day, or position on the horizon. Images are available in raw RGB format and in grayscale. Images are also available in units relevant to the physiology of human cone photoreceptors, where pixel values represent the expected number of photoisomerizations per second for cones sensitive to long (L), medium (M), and short (S) wavelengths. The database is distributed under a Creative Commons Attribution-Noncommercial Unported license to facilitate research in computer vision, the psychophysics of perception, and visual neuroscience.
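The cone-referenced image format described above amounts, in essence, to a per-pixel linear mapping from calibrated camera RGB to L, M, and S cone responses. A minimal sketch of such a conversion follows; the matrix and scale factor are invented placeholders, not the database's actual calibration, which depends on the camera's measured spectral sensitivities and the human cone fundamentals.

```python
import numpy as np

# Placeholder matrix mapping linear camera RGB to relative L, M, S cone
# responses. These numbers are illustrative only, not the database's
# real calibration.
RGB_TO_LMS = np.array([
    [0.31, 0.62, 0.05],   # L
    [0.16, 0.72, 0.12],   # M
    [0.02, 0.13, 0.85],   # S
])

def rgb_to_lms(image_rgb, rate_scale=1e4):
    """Map an H x W x 3 linear RGB image to estimated photoisomerization
    rates (isomerizations per cone per second) via a per-pixel 3x3 linear
    transform. rate_scale is an assumed radiometric scale factor."""
    return (np.asarray(image_rgb, float) @ RGB_TO_LMS.T) * rate_scale

demo = np.random.rand(4, 4, 3)   # stand-in for one calibrated image
lms = rgb_to_lms(demo)
print(lms.shape)                 # (4, 4, 3)
```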
P. J. Kellman, P. Garrigan, & T. F. Shipley presented a theory of 3-D interpolation in object perception. Along with results from many researchers, this work supports an emerging picture of how the visual system connects separate visible fragments to form objects. In his commentary, B. L. Anderson challenges parts of that view, especially the idea of a common underlying interpolation component in modal and amodal completion (the identity hypothesis). Here the authors analyze Anderson's evidence and argue that he neither provides any reason to abandon the identity hypothesis nor offers a viable alternative theory. The authors offer demonstrations and analyses indicating that interpolated contours can appear modally despite absence of the luminance relations, occlusion geometry, and surface attachment that Anderson claims to be necessary. The authors elaborate crossing interpolations as key cases in which modal and amodal appearance must be consequences of interpolation. Finally, the authors dispute Anderson's assertion that vision researchers are misguided in using objective performance methods, and they argue that his challenges to relatability fail because contour and surface processes, as well as local and global influences, have been distinguished experimentally.
Cones with peak sensitivity to light at long (L), medium (M), and short (S) wavelengths are unequal in number on the human retina: S cones are rare (<10%), although their fraction increases from center to periphery, and L/M cone proportions are highly variable between individuals. What optical properties of the eye, and what statistical properties of natural scenes, might drive this organization? We found that the spatial-chromatic structure of natural scenes was largely symmetric between the L, M, and S sensitivity bands. Given this symmetry, short-wavelength attenuation by the ocular media gave L/M cones a modest signal-to-noise advantage, which was amplified, especially in the denser central retina, by long-wavelength accommodation of the lens. Meanwhile, the total information represented by the cone mosaic remained relatively insensitive to L/M proportions. Thus, the observed cone array design, together with a long-wavelength-accommodated lens, provides a selective advantage: it is maximally informative.
Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates for a given edge which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D, orientation-sensitive units.
Perceptual learning refers to experience-induced improvements in the pick-up of information. Perceptual constancy describes the fact that, despite variable sensory input, perceptual representations typically correspond to stable properties of objects. Here, we show evidence of a strong link between perceptual learning and perceptual constancy: Perceptual learning depends on constancy-based perceptual representations. Perceptual learning may involve changes in early sensory analyzers, but such changes may in general be constrained by categorical distinctions among the high-level perceptual representations to which they contribute. Using established relations of perceptual constancy and sensory inputs, we tested the ability to discover regularities in tasks that dissociated perceptual and sensory invariants. We found that human subjects could learn to classify based on a perceptual invariant that depended on an underlying sensory invariant but could not learn the identical sensory invariant when it did not correlate with a perceptual invariant. These results suggest that constancy-based representations, known to be important for thought and action, also guide learning and plasticity.

Classical theories and contemporary computational accounts of sensation and perception distinguish between variables encoded in early sensory analysis and higher-level representations of objects, scenes, and events. Whereas early analyzers involve relatively local responses to energy, perceptual representations most often correspond to stable properties of material objects. Object properties persist across changes in the energy reaching the senses, so that comprehending the world requires perceptual constancy: the attainment of relatively constant perceptual descriptions despite variation in the sensory inputs used to compute them.
A common example is constancy of size: Under a variety of conditions, an object's perceived size does not vary as the observer's viewing distance changes, even though such changes alter the projected (retinal) size. Similarly, an object's surface lightness (shade of gray) does not appear to change when an object is viewed outside in sunshine or indoors, despite changes of more than three orders of magnitude in the illumination.