A multiple case study was conducted to assess three leading theories of developmental dyslexia: (i) the phonological theory, (ii) the magnocellular (auditory and visual) theory and (iii) the cerebellar theory. Sixteen dyslexic and 16 control university students were administered a full battery of psychometric, phonological, auditory, visual and cerebellar tests. Individual data reveal that all 16 dyslexics suffer from a phonological deficit, 10 from an auditory deficit, four from a motor deficit and two from a visual magnocellular deficit. Results suggest that a phonological deficit can appear in the absence of any other sensory or motor disorder, and is sufficient to cause a literacy impairment, as demonstrated by five of the dyslexics. Auditory disorders, when present, aggravate the phonological deficit, and hence the literacy impairment. However, auditory deficits cannot be characterized simply as rapid auditory processing problems, as would be predicted by the magnocellular theory. Nor are they restricted to speech. Contrary to the cerebellar theory, we find little support for the notion that motor impairments, when found, have a cerebellar origin or reflect an automaticity deficit. Overall, the present data support the phonological theory of dyslexia, while acknowledging the presence of additional sensory and motor disorders in certain individuals.
Three classes of perceptual phenomena have repeatedly been associated with autism spectrum disorder (ASD): superior processing of fine detail (local structure), either inferior processing of overall/global structure or an ability to ignore disruptive global/contextual information, and impaired motion perception. This review evaluates the quality of the evidence bearing on these three phenomena. We argue that while superior local processing has been robustly demonstrated, conclusions about global processing cannot be definitively drawn from the experiments to date, which have generally not precluded observers using more local cues. Perception of moving stimuli is impaired in ASD, but explanations in terms of magnocellular/dorsal deficits do not appear to be sufficient. We suggest that abnormalities in the superior temporal sulcus (STS) may provide a neural basis for the range of motion-processing deficits observed in ASD, including biological motion perception. Such an explanation may also provide a link between perceptual abnormalities and specific deficits in social cognition associated with autism.
There is considerable interest in how humans estimate the number of objects in a scene in the context of an extensive literature on how we estimate the density (i.e., spacing) of objects. Here, we show that our sense of number and our sense of density are intertwined. Presented with two patches, observers found it more difficult to spot differences in either density or numerosity when those patches were mismatched in overall size, and their errors were consistent with larger patches appearing both denser and more numerous. We propose that density is estimated using the relative response of mechanisms tuned to low and high spatial frequencies (SFs), because energy at high SFs is largely determined by the number of objects, whereas low SF energy depends more on the area occupied by elements. This measure is biased by overall stimulus size in the same way as human observers, and by estimating number using the same measure scaled by relative stimulus size, we can explain all of our results. This model is a simple, biologically plausible common metric for perceptual number and density.

Keywords: psychophysics | vision | texture | numerical cognition
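The proposed metric — the relative response of low- versus high-SF mechanisms, with number obtained by rescaling for stimulus size — can be sketched roughly as follows. This is a minimal illustration of the idea, not the authors' implementation; the radial cutoff value and the function names are assumptions made here for concreteness.

```python
import numpy as np

def sf_energy(patch, cutoff=0.125):
    """Split a patch's power spectrum into low- and high-spatial-frequency
    energy about a radial cutoff (in cycles/pixel; Nyquist = 0.5).
    The cutoff of 0.125 is an illustrative choice, not from the paper."""
    power = np.abs(np.fft.fft2(patch - patch.mean())) ** 2
    fy = np.fft.fftfreq(patch.shape[0])
    fx = np.fft.fftfreq(patch.shape[1])
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return power[radius < cutoff].sum(), power[radius >= cutoff].sum()

def density_estimate(patch):
    """Density proxy: high-SF energy (driven mainly by element count)
    relative to low-SF energy (driven mainly by occupied area)."""
    lo, hi = sf_energy(patch)
    return hi / (lo + 1e-12)

def number_estimate(patch, relative_size):
    """Number proxy: the same density metric scaled by relative
    stimulus size (e.g., the area ratio of the two patches)."""
    return density_estimate(patch) * relative_size
```

Because both judgments share one underlying measure, a size mismatch between patches biases density and numerosity comparisons in the same direction, which is the pattern the abstract reports.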
Much work in the cognitive neuroscience of schizophrenia has focused on attention, memory, and executive functioning. To date, less work has focused on perceptual processing. However, perceptual functions are frequently disrupted in schizophrenia, and thus this domain has been included in the CNTRICS (Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia) project. In this article, we describe the basic science presentation and the breakout group discussion on the topic of perception from the first CNTRICS meeting, held in Bethesda, Maryland on February 26 and 27, 2007. The importance of perceptual dysfunction in schizophrenia, the nature of perceptual abnormalities in this disorder, and the critical need to develop perceptual tests appropriate for future clinical trials were discussed. Although deficits are also seen in auditory, olfactory, and somatosensory processing in schizophrenia, the first CNTRICS meeting focused on visual processing deficits. Key concepts of gain control and integration in visual perception were introduced. Definitions and examples of these concepts are provided in this article. Use of visual gain control and integration fit a number of the criteria suggested by the CNTRICS committee, provide fundamental constructs for understanding the visual system in schizophrenia, and are inclusive of both lower-level and higher-level perceptual deficits.
Channel-based models of human spatial vision require that the output of spatial filters be pooled across space. This pooling yields global estimates of local feature attributes such as orientation that are useful in situations in which that attribute may be locally variable, as is the case for visual texture. This study considers the spatial characteristics of orientation summation. By assessing the effect of orientation variability on observers' ability to estimate the mean orientation of spatially unstructured textures, one can determine both the internal noise on each orientation sample and the number of samples being pooled. By a combination of fixing and covarying the size of textured regions and the number of elements constituting them, one can then assess the effects of the texture's size, density, and numerosity (the number of elements present) on the internal noise and the sampling density. Results indicate that internal noise shows a primary dependence on texture density but that, counterintuitively, subjects rely on a sample size approximately equal to a fixed power of the number of samples present, regardless of their spatial arrangement. Orientation pooling is entirely flexible with respect to the position of input features.
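The equivalent-noise logic behind this approach can be sketched in a few lines: if an observer averages n orientation samples, each corrupted by internal noise of standard deviation sigma_int, drawn from a stimulus distribution with external standard deviation sigma_ext, the variability of the resulting mean estimate follows directly. This is a generic sketch of the standard equivalent-noise model, with parameter names chosen here for illustration.

```python
import numpy as np

def predicted_threshold(sigma_ext, sigma_int, n):
    """Equivalent-noise prediction: s.d. of the mean of n samples,
    each carrying external (stimulus) and internal noise."""
    return np.sqrt((sigma_ext**2 + sigma_int**2) / n)

def simulate_mean_estimate(true_mean, sigma_ext, sigma_int, n, rng):
    """One trial of a model observer that pools n noisy orientation
    samples and reports their average (orientations in degrees)."""
    samples = rng.normal(true_mean, sigma_ext, n)   # stimulus variability
    samples += rng.normal(0.0, sigma_int, n)        # per-sample internal noise
    return samples.mean()
```

Measuring thresholds at several levels of sigma_ext and fitting this two-parameter curve is what lets the method recover sigma_int and the effective sample size n separately, as the abstract describes.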
When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject’s default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics.
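A logistic-regression model of this kind combines the current stimulus with regressors coding the previous choice and its outcome. The sketch below is a generic version of such a model; the specific regressor coding (previous choice as ±1, outcome as 0/1) and weight names are assumptions made here, not the paper's exact parameterization.

```python
import numpy as np

def choice_prob(stimulus, prev_choice, prev_success, w):
    """P(choose 'right') on the current trial.
    stimulus     : signed evidence for 'right'
    prev_choice  : +1 if previous choice was 'right', -1 if 'left'
    prev_success : 1 if the previous choice was rewarded, else 0
    w            : [bias, w_stim, w_after_win, w_after_loss]
    A negative w[3] reproduces the typical switch-after-failure bias."""
    z = (w[0]
         + w[1] * stimulus
         + w[2] * prev_choice * prev_success
         + w[3] * prev_choice * (1 - prev_success))
    return 1.0 / (1.0 + np.exp(-z))
```

Fitting the history weights per observer quantifies each bias; in an unbiased observer w[2] and w[3] would both be zero, since past outcomes are irrelevant in this task.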
The structure of the human face allows it to signal a wide range of useful information about a person's gender, identity, mood, etc. We show empirically that facial identity information is conveyed largely via mechanisms tuned to horizontal visual structure. Specifically, observers perform substantially better at identifying faces that have been filtered to contain just horizontal information compared to any other orientation band. We then show, computationally, that horizontal structures within faces have an unusual tendency to fall into vertically co-aligned clusters compared with images of natural scenes. We call these clusters "bar codes" and propose that they have important computational properties. We propose that it is this property that makes faces "special" visual stimuli: they are able to transmit information as a reliable spatial sequence, a highly constrained one-dimensional code. We show that such structure affords computational advantages for face detection and decoding, including robustness to normal environmental image degradation, but makes faces vulnerable to certain classes of transformation that change the sequence of bars, such as spatial inversion or contrast-polarity reversal.
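Orientation-band filtering of the kind used in such experiments can be sketched with a Fourier-domain mask. This is a generic sketch, not the study's stimulus-generation code; the 30-degree bandwidth is an illustrative assumption. Note the convention: horizontal image structure (e.g., brows, eyes, mouth) corresponds to energy near the vertical frequency axis, i.e., center_deg=90 below.

```python
import numpy as np

def orientation_filter(image, center_deg, bandwidth_deg=30.0):
    """Keep only Fourier energy within an orientation band (plus DC).
    center_deg=90 retains horizontal image structure."""
    h, w = image.shape
    f = np.fft.fftshift(np.fft.fft2(image))
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    theta = np.degrees(np.arctan2(FY, FX))
    # angular distance to the band centre; orientation wraps every 180 deg
    d = np.abs(((theta - center_deg) + 90.0) % 180.0 - 90.0)
    mask = d <= bandwidth_deg / 2.0
    mask[h // 2, w // 2] = True  # always keep DC (mean luminance)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

Applying this with center_deg swept across orientations yields the family of filtered face images whose identification accuracies the abstract compares.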