In cognition, audition, and somatosensation, performance strongly correlates between different paradigms, which suggests the existence of common factors. In contrast, visual performance in seemingly very similar tasks, such as visual acuity and bisection acuity, is hardly related, i.e., pairwise correlations between performance levels are low even though test-retest reliability is high. Here, we show similar results for visual illusions. Consistent with previous findings, we found a significant correlation between the magnitudes of the Ebbinghaus and Ponzo illusions, but this was the only significant correlation out of 15 further comparisons. Similarly, we found a significant link between the Ponzo illusion and both mental imagery and cognitive disorganization. However, most other correlations between illusions and personality were not significant. The findings suggest that vision is highly specific, i.e., there is no common factor. While this proposal does not exclude strong and stable associations between certain illusions, or between certain illusions and personality traits, these associations seem to be the exception rather than the rule.
Common factors are ubiquitous. For example, there is a common factor, g, for intelligence. In vision, there is much weaker evidence for such common factors. For example, visual illusion magnitudes correlate only weakly with each other. Here, we investigated whether illusions are hyper-specific, as in perceptual learning. First, we tested 19 variants of the Ebbinghaus illusion that differed in color, shape, or texture. Correlations between the illusion magnitudes of the different variants were mostly significant. Second, we reanalyzed a dataset from a previous experiment in which 10 illusions were tested under four luminance conditions and found significant correlations between the different luminance conditions of each illusion. However, there were only very weak correlations between the 10 different illusions. Third, five visual illusions were tested at four orientations. Again, there were significant correlations between the four orientations of each illusion, but not across different illusions. The weak inter-illusion correlations suggest that there is no single common mechanism for the tested illusions. We suggest that most illusions constitute their own factor.
Despite well-established sex differences for cognition, audition, and somatosensation, few studies have investigated whether there are also sex differences in visual perception. We report the results of fifteen perceptual measures (such as visual acuity, visual backward masking, contrast detection threshold, or motion detection) for a cohort of over 800 participants. On six of the fifteen tests, males significantly outperformed females; on no test did females significantly outperform males. Given this heterogeneity of the sex effects, it is unlikely that the sex differences are due to any single mechanism. A practical consequence of these results is that it is important to control for sex in vision research, and that studies reporting sex differences in cognitive measures that use visually based tasks should confirm that their results cannot be explained by baseline sex differences in visual perception.
Vision scientists have attempted to classify visual illusions according to certain aspects, such as brightness or spatial features. For example, Piaget proposed that visual illusion magnitudes either decrease or increase with age. Subsequently, it was suggested that illusions are segregated according to their context: real-world contexts enhance and abstract contexts inhibit illusion magnitudes with age. We tested the effects of context on the Müller-Lyer and Ponzo illusions with a standard condition (no additional context), a line-drawing perspective condition, and a real-world perspective condition. A mixed-effects model analysis, based on data from 76 observers with ages ranging from 6 to 66 years, did not reveal any significant interaction between context and age. Although we found strong intra-illusion correlations for both illusions, we found only weak inter-illusion correlations, suggesting that the structure underlying these two spatial illusions includes several specific factors.
Perceptual learning is usually assumed to occur within sensory areas or when sensory evidence is mapped onto decisions. Subsequent procedural and motor processes, involved in most perceptual learning experiments, are thought to play no role in the learning process. Here, we show that this is not the case. Observers trained with a standard three-line bisection task and indicated the offset direction of the central line by pressing either a left or right push button. Before and after training, observers adjusted the central line of the same bisection stimulus using a computer mouse. As expected, performance improved through training. Surprisingly, learning did not transfer to the untrained mouse adjustment condition. The same was true for the opposite direction, i.e., training with mouse adjustments did not transfer to the push button condition. We found partial transfer when observers adjusted the central line with two different adjustment procedures. We suggest that perceptual learning is specific to procedural motor aspects beyond visual processing. Our results support theories in which visual stimuli are coded together with their corresponding actions.
Across saccadic eye movements, the visual system receives two successive static images corresponding to the pre- and postsaccadic projections of the visual field on the retina. Whether a mechanism integrates the content of these images is still a matter of debate. Here, we studied the transfer of a visual feature across saccades using a blanking paradigm. Participants moved their eyes to a peripheral grating and discriminated a change in its orientation occurring during the eye movement. The grating was either constantly on the screen or briefly blanked during and after the saccade. Moreover, it was either of the same luminance as the background (i.e., isoluminant) or anisoluminant with respect to it. We found that, for anisoluminant gratings, orientation discrimination across saccades improved when a blank followed the onset of the eye movement. This effect, however, was abolished with isoluminant gratings. Additionally, performance also improved when an anisoluminant grating presented before the saccade was followed by an isoluminant one. These results demonstrate that a detailed representation of the presaccadic image is transferred across saccades, allowing participants to perform better on the trans-saccadic orientation task. While such a transfer of visual orientation across saccades is masked under real-life anisoluminant conditions, the use of a blank and of an isoluminant postsaccadic grating revealed its existence here.
Significance statement: Static objects are perceived as stationary across eye movements even though their visual projections shift on our retina. To compensate for such shifts and create a continuous perception of space, our brain may keep track of objects' visual features across our movements. We found that briefly blanking a contrast-defined object during and after saccades allows a detailed representation of its orientation to be recovered.
We propose that the transfer of visual content across saccades revealed with the use of a simple blank plays an important role in ensuring our continuous and stable perception of the world.
Active sensing has important consequences for multisensory processing (Schroeder et al., 2010). Here, we asked whether, in the absence of saccades, the position of the eyes and the timing of transient color changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the “where” and the “when” of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining their eye position on the left, right, or center of the screen. Participants counted color changes of the fixation cross while ignoring sounds that could be presented to the left, right, or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention being directed to visual inputs. Second, color changes elicited robust modulations of auditory cortex responses (“when” prediction), seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses on the side contralateral to sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transients can automatically elicit a prediction of “when” a sound will occur by changing the excitability of auditory cortices, irrespective of the attended modality, eye position, or spatial congruency of auditory and visual events. In contrast, auditory cortical responses were not significantly affected by eye position, suggesting that “where” predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orienting to sounds.