SUMMARY Peripheral vision is fundamentally limited not by the visibility of features but by the spacing between them [1]. When features are too close together, they become “crowded” and perceptually indistinguishable. Crowding interferes with basic tasks such as letter and face identification, and thus informs our understanding of how object recognition breaks down in peripheral vision [2]. Multiple proposals have attempted to explain crowding [3], each supported by compelling psychophysical and neuroimaging data [4–6] that are incompatible with competing proposals. Perceptual failures have variously been attributed to the averaging of nearby visual signals [7–10], confusions between target and distractor elements [11, 12], and the limited resolution of visual spatial attention [13]. Here we introduce a psychophysical paradigm that allows systematic study of crowded perception within the orientation domain, and we present a unifying computational model of crowding that reconciles these conflicting explanations. Our results show that a single measure produces the variety of perceptual errors reported across the crowding literature. Critically, a simple model of the responses of populations of orientation-selective visual neurons accurately predicts all of these perceptual errors. We thus provide a unifying mechanistic explanation for orientation crowding in peripheral vision. Our simple model accounts for several perceptual phenomena produced by crowding of orientation and raises the possibility that multiple classes of object recognition failures in peripheral vision can be accounted for by a single mechanism.
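The population-coding idea in the summary above can be illustrated with a toy simulation: orientation-selective neurons respond to a target, a nearby flanker drives an overlapping population, and a simple readout of the pooled activity is "pulled" toward the target–flanker average. All tuning parameters, the pooling rule, and the readout here are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

# Preferred orientations of a hypothetical population of 64 neurons (degrees)
preferred = np.linspace(0, 180, 64, endpoint=False)

def tuning(stimulus, kappa=2.0):
    """Von Mises tuning curves over orientation (180-degree periodic).

    kappa (tuning width) is an arbitrary illustrative choice.
    """
    d = np.deg2rad(2.0 * (stimulus - preferred))
    return np.exp(kappa * (np.cos(d) - 1.0))

def decode(rates):
    """Population-vector readout on the doubled-angle circle."""
    z = np.sum(rates * np.exp(1j * np.deg2rad(2.0 * preferred)))
    return np.rad2deg(np.angle(z)) / 2.0 % 180.0

target, flanker = 80.0, 100.0

# Target alone: the readout recovers the target orientation.
isolated = tuning(target)

# Crowded: responses to target and flanker are pooled (simple summation),
# so the decoded orientation shifts toward their average -- one of the
# "averaging" errors attributed to crowding in the summary above.
crowded = tuning(target) + tuning(flanker)

print(decode(isolated))  # ~80 degrees
print(decode(crowded))   # ~90 degrees, the target-flanker average
```

With equal-contrast target and flanker the pooled code is symmetric about their mean, so this sketch reproduces only the averaging class of errors; substitution-like confusions would require, for example, noisy or winner-take-all readout instead of the population vector.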
Our ability to recognize objects in peripheral vision is impaired when other objects are nearby (Bouma, 1970). This phenomenon, known as crowding, is often linked to interactions in early visual processing that depend primarily on the retinal position of visual stimuli (Pelli, 2008; Pelli and Tillman, 2008). Here we tested a new account suggesting that crowding is influenced by spatial information derived from an extraretinal signal involved in eye movement preparation. We had human observers execute eye movements to crowded targets and measured their ability to identify those targets just before the eyes began to move. Beginning ~50 ms before a saccade toward a crowded object, we found that not only was there a dramatic reduction in the magnitude of crowding, but the spatial area within which crowding occurred was almost halved. These changes in crowding occurred despite no change in the retinal position of target or flanking stimuli. Contrary to the notion that crowding depends on retinal signals alone, our findings reveal an important role for eye movement signals. Eye movement preparation effectively enhances object discrimination in peripheral vision at the goal of the intended saccade. These presaccadic changes may enable enhanced recognition of visual objects in the periphery during active search of visually cluttered environments.
Frontal dynamic aphasia is characterised by a profound reduction in spontaneous speech despite well-preserved naming, repetition and comprehension. Since Luria (1966, 1970) coined the term, two main forms of dynamic aphasia have been identified: first, a language-specific selection deficit at the level of word/sentence generation, associated with left inferior frontal lesions; and second, a domain-general impairment in generating multiple responses or connected speech, associated with more extensive bilateral frontal and/or frontostriatal damage. Both forms of dynamic aphasia have been interpreted as arising from disturbances in early prelinguistic conceptual preparation mechanisms that are critical for language production. We investigate language-specific and domain-general accounts of dynamic aphasia and address two issues: first, whether deficits in multiple conceptual preparation mechanisms can co-occur; and second, the contribution of broader cognitive processes such as energization, the ability to initiate and sustain response generation over time, to language generation failure. We report patient WAL, who presented with frontal dynamic aphasia in the context of progressive supranuclear palsy (PSP). WAL was given a series of experimental tests which showed that his dynamic aphasia was not underpinned by a language-specific deficit in selection or in microplanning. By contrast, WAL presented with a domain-general deficit in fluent sequencing of novel thoughts. The latter replicated the pattern documented in a previous PSP patient (Robinson et al., 2006); however, unique to WAL, generating novel thoughts was impaired with no evidence of a sequencing deficit, as perseveration was absent. Thus, WAL is the first unequivocal case to show a distinction between novel thought generation and subsequent fluent sequencing.
Moreover, WAL's generation deficit encompassed verbal and non-verbal responses, showing a similar (but more profoundly reduced) pattern of performance to that of frontal patients with an energization deficit. In addition to impaired generation of novel thoughts, WAL presented with a concurrent strategy generation deficit, both falling within the second form of dynamic aphasia, which is characterised by impairments of domain-general conceptual preparation mechanisms. Thus, within this second form of dynamic aphasia, deficits can co-occur. Overall, WAL presented with the second form of dynamic aphasia and was impaired in the generation of novel thoughts and internally-generated strategies, in the context of PSP and bilateral frontostriatal damage.