A large body of research supports the hypothesis that the human visual system does not process a face as a collection of separable facial features but as an integrated perceptual whole. One common assumption is that we quickly build holistic representations to extract useful second-order information provided by the variation between the faces of different individuals. An alternative account suggests holistic processing is a fast, early grouping process that first serves to distinguish faces from other competing objects. From this perspective, holistic processing is a quick initial response to the first-order information present in every face. To test this hypothesis, we developed a novel paradigm for measuring the face inversion effect, a standard marker of holistic face processing, that measures the minimum exposure time required to discriminate between two stimuli. These new data demonstrate that holistic processing operates on whole upright faces, regardless of whether subjects are required to extract first- or second-order information. In light of this, we argue that holistic processing is a general mechanism that may occur at an earlier stage of face perception than individual discrimination to support the rapid detection of face stimuli in everyday visual scenes.
This study distinguished between different subclusters of autistic traits in the general population and examined the relationships between these subclusters, looking at the eyes of faces, and the ability to recognize facial identity. Using the Autism Spectrum Quotient (AQ) measure in a university-recruited sample, we separated the social aspects of autistic traits (i.e., those related to communication and social interaction; AQ-Social) from the non-social aspects, particularly attention-to-detail (AQ-Attention). We provide the first evidence that these social and non-social aspects are associated differentially with looking at eyes: While AQ-Social showed the commonly assumed tendency towards reduced looking at eyes, AQ-Attention was associated with increased looking at eyes. We also report that higher attention-to-detail (AQ-Attention) was indirectly related to improved face recognition, mediated by an increased number of fixations to the eyes during face learning. Higher levels of socially relevant autistic traits (AQ-Social) trended in the opposite direction, towards being related to poorer face recognition (significantly so in females on the Cambridge Face Memory Test). There was no evidence of any mediated relationship between AQ-Social and face recognition via reduced looking at the eyes. These different effects of AQ-Attention and AQ-Social suggest face-processing studies in Autism Spectrum Disorder might similarly benefit from considering symptom subclusters. Additionally, concerning mechanisms of face recognition, our results support the view that more looking at eyes predicts better face memory.
When two objects are flashed at one location in close temporal proximity in the visual periphery, an intriguing illusion occurs whereby a single flash presented concurrently at another location appears to flash twice (the visual double-flash illusion: Chatterjee et al., 2011, Wilson & Singer, 1981). Here, for the first time, we investigate the time course of the effect, and directly compare it to the time course of the auditory (sound-induced flash illusion) effect, for both fission (single test flash, double inducer) and fusion (double test flash, single inducer) conditions, across stimulus onset asynchronies (SOAs) of 30 to 250 ms. In addition, using a novel audiovisual stimulus, we directly compare the cue strength of the two modalities, and whether they are additive in effect. The results show that the time course of fission and fusion is different for visual inducers, but not for auditory inducers. In audiovisual conditions, in situations of uncertainty, observers tended to follow the more reliable (auditory) cue. There was little evidence for a superadditive effect of auditory and visual cues; rather, observers tended to follow one or the other modality. The results suggest that the visually induced flash illusion and the auditory-induced effect may both stem from perceptual uncertainty, with the difference in time courses attributable to the lower temporal resolution of vision compared to audition.
Temporal integration in the visual system causes fast-moving objects to generate static, oriented traces (‘motion streaks’), which could be used to help judge direction of motion. While human psychophysics and single-unit studies in non-human primates are consistent with this hypothesis, direct neural evidence from the human cortex is still lacking. First, we provide psychophysical evidence that faster and slower motions are processed by distinct neural mechanisms: faster motion raised human perceptual thresholds for static orientations parallel to the direction of motion, whereas slower motion raised thresholds for orthogonal orientations. We then used functional magnetic resonance imaging to measure brain activity while human observers viewed either fast (‘streaky’) or slow random dot stimuli moving in different directions, or corresponding static-oriented stimuli. We found that local spatial patterns of brain activity in early retinotopic visual cortex reliably distinguished between static orientations. Critically, a multivariate pattern classifier trained on brain activity evoked by these static stimuli could then successfully distinguish the direction of fast (‘streaky’) but not slow motion. Thus, signals encoding static-oriented streak information are present in human early visual cortex when viewing fast motion. These experiments show that motion streaks are present in the human visual system for faster motion.
Symmetry is a ubiquitous feature in the visual environment and can be detected by a variety of species, ranging from insects through to humans [1,2]. Here we show it can also bias estimates of basic scene properties. Mirror (reflective) symmetry can be detected in as little as 50 ms, in both natural and artificial visual scenes, and even when embedded within cluttered backgrounds [1]. In terms of its biological relevance, symmetry is a key determinant in mate selection; the degree of symmetry in a face is positively associated with perceived healthiness and attractiveness ratings [3]. In short, symmetry processing mechanisms are an important part of the neural machinery of vision. We reveal that the importance of symmetry extends beyond the processing of shape and objects. Mirror symmetry biases our perception of scene content, with symmetrical patterns appearing to have fewer components than their asymmetric counterparts. This demonstrates an interaction between two fundamental dimensions of visual analysis: symmetry [1] and number [4]. We propose that this numerical underestimation results from a processing bias away from the redundant information within mirror symmetrical displays, extending existing theories regarding redundancy in visual analysis [5,6].
Visually induced illusions of self-motion (vection) can be compelling for some people, but they are subject to large individual variations in strength. Do these variations depend, at least in part, on the extent to which people rely on vision to maintain their postural stability? We investigated this question by comparing physical posture measures to subjective vection ratings. Using a Bertec balance plate in a brightly lit room, we measured 13 participants' excursions of the centre of foot pressure (CoP) over a 60-second period of quiet stance, with eyes open and with eyes closed. Subsequently, we collected vection strength ratings for large optic flow displays while participants were seated, using both verbal ratings and online throttle measures. We also collected measures of postural sway (changes in anterior-posterior CoP) in response to the same visual motion stimuli while participants stood on the plate. The magnitude of standing sway in response to expanding optic flow (compared with blank fixation periods) was predictive of both verbal and throttle measures of seated vection. In addition, the ratio between eyes-open and eyes-closed CoP excursions during quiet stance (using the area of postural sway) significantly predicted seated vection for both measures. Interestingly, these relationships were weaker for contracting optic flow displays, even though contracting flow produced both stronger vection and more sway. Next, we applied a non-linear analysis (recurrence quantification analysis, RQA) to the fluctuations in anterior-posterior CoP position during quiet stance (both with eyes closed and eyes open); this was a much stronger predictor of seated vection for both expanding and contracting stimuli. Given the complex multisensory integration involved in postural control, our study adds to the growing evidence that non-linear measures drawn from complexity theory may provide a more informative measure of postural sway than conventional linear measures.
Fast-moving visual features are thought to leave neural 'streaks' that can be detected by orientation-selective cells. Here, we tested whether 'motion streaks' can induce classic tilt aftereffects (TAEs) and tilt illusions (TIs). For TAEs, participants adapted to random arrays of small Gaussian blobs drifting at 9.5 deg/s. Following adaptation to directions of 15, 30, 45, 60, 75, and 90 degrees (clockwise from vertical), subjective vertical was measured for a briefly presented test grating. For TIs, the same motions were presented in an annular surround and subjective vertical was measured for a simultaneously presented central grating. All motions were 50% coherent, with half the blobs following random-walk paths and half following a fixed direction. Strong and weak streaks were compared by varying streak length (the number of consecutive fixed-direction frames), rather than by manipulating speed, so that speed and coherence were matched across all conditions. Strong motion streaks produced robust TAEs and TIs, similar in magnitude and orientation tuning to those induced by tilted lines. These effects were weak or absent in weak-streak conditions, and when motion was too slow to form streaks. Together, these results indicate that motion streaks produced by temporal integration of fast translating features do effectively adapt orientation-selective cells and may therefore be exploited to improve perception of motion direction, as described in the 'motion streaks' model.