To be published in Behavioral and Brain Sciences (in press), Cambridge University Press, 2007. Below is the unedited précis of a book that is being accorded BBS multiple book review.
Human adults can go beyond the limits of individual sensory systems' resolutions by integrating multiple estimates (e.g., vision and touch) to reduce uncertainty. Little is known about how this ability develops. Although some multisensory abilities are present from early infancy, it is not until age ≥8 y that children use multiple modalities to reduce sensory uncertainty. Here we show that uncertainty reduction by sensory integration does not emerge until 12 y even within the single modality of vision, in judgments of surface slant based on stereoscopic and texture information. However, adults' integration of sensory information comes at the cost of losing access to the individual estimates that feed into the integrated percept ("sensory fusion"). By contrast, 6-y-olds do not experience fusion but are able to keep stereo and texture information separate. This ability enables them to outperform adults when discriminating stimuli in which these information sources conflict. Further, unlike adults, 6-y-olds show speed gains consistent with following the fastest-available single cue. Therefore, whereas the mature visual system is optimized for reducing sensory uncertainty, the developing visual system may be optimized for speed and for detecting sensory conflicts. Such conflicts could provide the error signals needed to learn the relationships between sensory information sources and to recalibrate them while the body is growing.

Human adults can reduce sensory uncertainty by integrating estimates, both across modalities (1, 2) (e.g., integrating vision and touch to judge size) and within a modality (3) (e.g., integrating visual stereoscopic and texture information to judge surface slant). Given independent estimates with uncorrelated Gaussian noise, the optimal reduction in uncertainty (variance) is obtained by a weighted averaging of estimates, in which each estimate is weighted in proportion to its relative reliability (1/variance) (4, 5).
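The reliability-weighted averaging rule described above can be sketched in a few lines of Python. The function name and the example cue values are illustrative, not taken from the study:

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Fuse independent Gaussian cues by reliability (1/variance) weighting."""
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()      # weights sum to 1
    fused = float(weights @ np.asarray(estimates, dtype=float))
    fused_variance = 1.0 / reliabilities.sum()         # never exceeds the best single cue
    return fused, fused_variance

# Hypothetical slant cues: stereo says 30 deg (variance 4), texture says 40 deg (variance 1)
fused, var = integrate_cues([30.0, 40.0], [4.0, 1.0])
# texture gets weight 0.8 and stereo 0.2, so fused = 38.0 and var = 0.8
```

Note that the fused variance (0.8) is smaller than either single-cue variance, which is the uncertainty-reduction benefit the abstract describes.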
Whereas human adults can achieve the optimal level of variance reduction in sensory tasks (1, 2), the developmental time course of this ability is unclear. Although some multisensory abilities are present from early infancy (6-8), in recent studies children did not integrate information across modalities for shape discrimination, spatial localization, or detection of visual-auditory events until age ≥8 y (9-11). In adults, sensory integration can lead to mandatory "fusion," in which the ability to judge the individual component estimates is lost (12, 13). Sensory fusion is especially strong for information within a single modality (12). One hypothesis for the late development of integration is that keeping information separate is adaptive in allowing the senses to be calibrated against each other while the body is growing (10, 14). To test whether children do keep sensory information sources separate, we tracked the development of sensory integration and fusion within the single modality of vision.

Results

Experiment 1: Cue Integration. The gradient of change in element size and density i...
Individuals of all ages extract structure from the sequences of patterns they encounter in their environment, an ability that is at the very heart of cognition. Exactly what underlies this ability has been the subject of much debate over the years. A novel mechanism, implicit chunk recognition (ICR), is proposed for sequence segmentation and chunk extraction. The mechanism relies on the recognition of previously encountered subsequences (chunks) in the input rather than on the prediction of upcoming items in the input sequence. A connectionist autoassociator model of ICR, the truncated recursive autoassociative chunk extractor (TRACX), is presented in which chunks are extracted by means of truncated recursion. The performance and robustness of the model are demonstrated in a series of 9 simulations of empirical data, covering a wide range of phenomena from the infant statistical learning and adult implicit learning literatures, as well as 2 simulations demonstrating the model's ability to generalize to new input and to develop internal representations whose structure reflects that of the items in the input sequence. TRACX outperforms PARSER (Perruchet & Vinter, 1998) and the simple recurrent network (SRN; Cleeremans & McClelland, 1991) in matching human sequence segmentation on existing data. A new study is presented exploring 8-month-olds' use of backward transitional probabilities to segment auditory sequences.
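The backward-transitional-probability cue mentioned in the final sentence can be illustrated with a toy sketch. The two-syllable "words" and all names below are hypothetical, and this is a minimal statistic, not the TRACX model itself:

```python
import random
from collections import Counter

def backward_tps(syllables):
    """Backward TP: P(x immediately precedes y | y occurred)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    successor_counts = Counter(syllables[1:])
    return {(x, y): n / successor_counts[y] for (x, y), n in pair_counts.items()}

# Toy stream built from two hypothetical words, "ba-bi" and "go-la"
random.seed(1)
stream = []
for _ in range(200):
    stream += random.choice([["ba", "bi"], ["go", "la"]])

tps = backward_tps(stream)
# Within-word backward TPs are 1.0 (e.g., "bi" is always preceded by "ba"),
# while across-word TPs are much lower, so dips in backward TP mark boundaries.
```

A segmentation heuristic could then posit a word boundary wherever the backward TP falls below some threshold.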
Two experiments investigated infants' ability to localize tactile sensations in peripersonal space. Infants aged 10 months (Experiment 1) and 6.5 months (Experiment 2) were presented with vibrotactile stimuli unpredictably to either hand while they adopted either a crossed- or uncrossed-hands posture. At 6.5 months, infants' responses were predominantly manual, whereas at 10 months, visual orienting behavior was more evident. Analyses of the direction of the responses indicated that (a) both age groups were able to locate tactile stimuli, (b) the ability to remap visual and manual responses to tactile stimuli across postural changes develops between 6.5 and 10 months of age, and (c) the 6.5-month-olds were biased to respond manually in the direction appropriate to the more familiar uncrossed-hands posture across both postures. The authors argue that there is an early visual influence on tactile spatial perception and suggest that the ability to remap visual and manual directional responses across changes in posture develops between 6.5 and 10 months, most likely because of the experience of crossing the midline gained during this period.
Neuroconstructivism is a theoretical framework focusing on the construction of representations in the developing brain. Cognitive development is explained as emerging from the experience-dependent development of neural structures supporting mental representations. Neural development occurs in the context of multiple interacting constraints acting on different levels, from the individual cell to the external environment of the developing child. Cognitive development can thus be understood as a trajectory originating from the constraints on the underlying neural structures. This perspective offers an integrated view of normal and abnormal development as well as of development and adult processing, and it stands apart from traditional cognitive approaches in taking seriously the constraints on cognition inherent to the substrate that delivers it.
Disentangling bottom-up and top-down processing in adult category learning is notoriously difficult. Studying category learning in infancy provides a simple way of exploring category learning while minimizing the contribution of top-down information. Three- to 4-month-old infants presented with cat or dog images will form a perceptual category representation for cat that excludes dogs and for dog that includes cats. The authors argue that an inclusion relationship in the distribution of features in the images explains the asymmetry. Using computational modeling and behavioral testing, the authors show that the asymmetry can be reversed or removed by using stimulus images that reverse or remove the inclusion relationship. The findings suggest that categorization of nonhuman animal images by young infants is essentially a bottom-up process.
Young infants show unexplained asymmetries in the exclusivity of categories formed on the basis of visually presented stimuli. A connectionist model is described that shows similar exclusivity asymmetries when categorizing the same stimuli presented to infants. The asymmetries can be explained in terms of an associative learning mechanism, distributed internal representations, and the statistics of the feature distributions in the stimuli. The model was used to explore the robustness of this asymmetry. The model predicts that the asymmetry will persist when a category is acquired in the presence of mixed category exemplars. An experiment with 3-4-month-olds showed that asymmetric exclusivity persisted in the presence of mixed-exemplar familiarization, thereby confirming the model's prediction.
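The inclusion account in the two abstracts above can be illustrated with a toy one-dimensional simulation. All numbers, names, and the crude range-based category test are invented for illustration; they are not the connectionist model described:

```python
import random

random.seed(0)
# Hypothetical single feature (e.g., a size-like dimension) in which the
# dog distribution spans a wider range that includes the cat distribution
cats = [random.uniform(4.0, 6.0) for _ in range(50)]
dogs = [random.uniform(2.0, 8.0) for _ in range(50)]

def in_range_category(exemplars, x):
    """Crude category test: x falls within the range of studied exemplars."""
    return min(exemplars) <= x <= max(exemplars)

# A category learned from dogs (broad range) admits most cats, while a
# category learned from cats (narrow range) excludes most dogs: the
# exclusivity asymmetry follows from the distributional inclusion alone.
cats_in_dog_cat = sum(in_range_category(dogs, c) for c in cats) / len(cats)
dogs_in_cat_cat = sum(in_range_category(cats, d) for d in dogs) / len(dogs)
```

Reversing the ranges (cats broad, dogs narrow) reverses the asymmetry, and matching the ranges removes it, in line with the behavioral manipulation described above.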