The ability to perceive facial motion is important for successful interaction in social environments. Previously, imaging studies have investigated neural correlates of facial motion primarily using abstract motion stimuli. Here, we studied how the brain processes natural non-rigid facial motion in direct comparison to static stimuli and matched phase-scrambled controls. As predicted from previous studies, dynamic faces elicit higher responses than static faces in lateral temporal areas corresponding to hMT+/V5 and STS. Interestingly, individually defined, static-face-sensitive regions in bilateral fusiform gyrus and left inferior occipital gyrus also respond more to dynamic than static faces. These results suggest integration of form and motion information during the processing of dynamic faces even in ventral temporal and inferior lateral occipital areas. In addition, our results show that dynamic stimuli are a robust tool to localize areas related to the processing of static and dynamic face information.
Previous research suggests that visual attention can be allocated to locations in space (space-based attention) and to objects (object-based attention). The cueing effects associated with space-based attention tend to be large and are found consistently across experiments. Object-based attention effects, however, are small and found less consistently across experiments. In three experiments we address the possibility that variability in object-based attention effects across studies reflects a low incidence of such effects at the level of individual subjects. Experiment 1 measured space-based and object-based cueing effects for horizontal and vertical rectangles in 60 subjects, comparing commonly used target detection and discrimination tasks. In Experiment 2 we ran another 120 subjects in a target discrimination task in which rectangle orientation varied between subjects. Using parametric statistical methods, we found object-based effects only for horizontal rectangles. Bootstrapping methods were used to measure effects in individual subjects. Significant space-based cueing effects were found in nearly all subjects in both experiments, across tasks and rectangle orientations. However, only a small number of subjects exhibited significant object-based cueing effects. Experiment 3 measured only object-based attention effects using another common paradigm and, again using bootstrapping, we found only a small number of subjects that exhibited significant object-based cueing effects. Our results show that object-based effects are more prevalent for horizontal rectangles, which is in accordance with the theory that attention may be allocated more easily along the horizontal meridian. The fact that so few individuals exhibit a significant object-based cueing effect may explain why previous studies of this effect have yielded inconsistent results.
The results from the current study highlight the importance of considering individual subject data in addition to commonly used statistical methods.
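The per-subject bootstrap analysis described above can be sketched as follows. This is a minimal illustration, not the authors' actual analysis code: the function name, trial counts, and synthetic reaction times are all hypothetical, and a 95% percentile confidence interval on the valid/invalid RT difference is assumed as the significance criterion.

```python
import numpy as np

def bootstrap_cueing_effect(rt_invalid, rt_valid, n_boot=10000, seed=0):
    """Bootstrap test of one subject's cueing effect (hypothetical sketch).

    The effect is the mean RT difference (invalid - valid) in ms. Trials
    are resampled with replacement; the effect counts as significant for
    this subject if the 95% percentile CI of the difference excludes 0.
    """
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        inv = rng.choice(rt_invalid, size=len(rt_invalid), replace=True)
        val = rng.choice(rt_valid, size=len(rt_valid), replace=True)
        diffs[i] = inv.mean() - val.mean()
    effect = rt_invalid.mean() - rt_valid.mean()
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return effect, (lo, hi), lo > 0  # significant facilitation if CI > 0

# Synthetic subject with a genuine ~30 ms cueing effect
rng = np.random.default_rng(1)
valid = rng.normal(400, 40, 120)    # valid-cue RTs (ms)
invalid = rng.normal(430, 40, 120)  # invalid-cue RTs (ms)
effect, ci, sig = bootstrap_cueing_effect(invalid, valid)
```

Running this kind of test separately on each subject's trials is what allows the incidence of significant effects to be counted across individuals, rather than inferring a population-level effect from group means alone.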
Recently there has been growing interest in the role that motion might play in the perception and representation of facial identity. Most studies have used old/new recognition as a task. However, especially for non-rigid motion, these studies have often produced contradictory results. Here, we used a delayed visual search paradigm to explore how learning is affected by non-rigid facial motion. In the current studies we trained observers on two frontal-view faces, one moving non-rigidly, the other a static picture. After a delay, observers were asked to identify the targets in static search arrays containing 2, 4, or 6 faces. On a given trial, target and distractor faces could be shown in one of five viewpoints: frontal, or 22 or 45 degrees to the left or right. We found that familiarizing observers with dynamic faces led to a constant reaction time advantage across all set sizes and viewpoints compared to static familiarization. This suggests that non-rigid motion affects identity decisions even across extended periods of time and changes in viewpoint. Furthermore, such effects may be difficult to observe using more traditional old/new recognition tasks.
Previous studies have shown that older subjects have difficulties discriminating the walking direction of point-light walkers. In two experiments, we investigated the underlying cause in further detail. In Experiment 1, subjects had to discriminate the walking direction of upright and inverted point-light walkers in a cloud of randomly moving dots. In general, older subjects performed less accurately and showed an increased inversion effect. Nevertheless, they were as accurate as young subjects for upright walkers during training, in which no noise was added to the display. These results indicate that older subjects are less able to extract relevant information from noisy displays. In Experiment 2, subjects discriminated the walking direction of scrambled walkers that primarily contained local motion information, random-position walkers that primarily contained global form information, and normal point-light walkers that contained both kinds of information. Both age groups performed at chance when no global form information was present in the display but were equally accurate for walkers that only contained global form information. However, when both local motion and global form information were present in the display, older subjects were less accurate than younger subjects. Older subjects again exhibited an increased inversion effect. These results indicate that both older and younger subjects rely more on global form than local motion to discriminate the direction of point-light walkers. Also, older subjects seem unable to integrate global form and local motion information as efficiently as younger subjects.
Facial motion carries essential information about other people's emotions and intentions. Most previous studies have suggested that facial motion is mainly processed in the superior temporal sulcus (STS), but several recent studies have also shown involvement of ventral temporal face-sensitive regions. Up to now, it is not known whether the increased response to facial motion is due to an increased amount of static information in the stimulus, to the deformation of the face over time, or to increased attentional demands. We presented nonrigidly moving faces and control stimuli to participants performing a demanding task unrelated to the face stimuli. We manipulated the amount of static information by using movies with different frame rates. The fluidity of the motion was manipulated by presenting movies with frames either in the order in which they were recorded or in scrambled order. Results confirm higher activation for moving compared with static faces in STS and under certain conditions in ventral temporal face-sensitive regions. Activation was maximal at a frame rate of 12.5 Hz and smaller for scrambled movies. These results indicate that both the amount of static information and the fluid facial motion per se are important factors for the processing of dynamic faces.
Despite well-established sex differences for cognition, audition, and somatosensation, few studies have investigated whether there are also sex differences in visual perception. We report the results of fifteen perceptual measures (e.g., visual acuity, visual backward masking, contrast detection thresholds, and motion detection) for a cohort of over 800 participants. On six of the fifteen tests, males significantly outperformed females. On no test did females significantly outperform males. Given this heterogeneity of the sex effects, it is unlikely that the sex differences are due to any single mechanism. A practical consequence of these results is that it is important to control for sex in vision research, and that studies reporting sex differences on cognitive measures that use visually based tasks should confirm that their results cannot be explained by baseline sex differences in visual perception.
We used fMRI to investigate the effects of tactile co-activation on the topographic organization of the human primary somatosensory cortex (SI). Behavioral consequences of co-activation were studied in a psychophysical task assessing the mislocalization of tactile stimuli. Co-activation was applied to the index, middle, and ring fingers of the right hand either synchronously or asynchronously. Cortical representations for synchronously co-activated fingers moved closer together, whereas cortical representations for asynchronously co-activated fingers became segregated. Behaviorally, this pattern coincided with an increased number of mislocalizations between synchronously co-activated fingers and a reduced number between asynchronously co-activated fingers. Thus, both synchronous and asynchronous coupling of passive tactile stimulation can induce short-term cortical reorganization associated with functionally relevant changes.
Perceptual functions change with age, particularly motion perception. With regard to healthy aging, previous studies mostly measured motion coherence thresholds for coarse motion direction discrimination along cardinal axes of motion. Here, we investigated age-related changes in the ability to discriminate between small angular differences in motion directions, which allows for a more specific assessment of age-related decline and its underlying mechanisms. We first assessed older (>60 years) and younger (<30 years) participants' ability to discriminate coarse horizontal (left/right) and vertical (up/down) motion at 100% coherence and a stimulus duration of 400 ms. In a second step, we determined participants' motion coherence thresholds for vertical and horizontal coarse motion direction discrimination. In a third step, we used the individually determined motion coherence thresholds and tested fine motion direction discrimination for motion clockwise away from horizontal and vertical motion. Older adults performed as well as younger adults for discriminating motion away from vertical. Surprisingly, performance for discriminating motion away from horizontal was strongly decreased. Further analyses, however, showed a relationship between motion coherence thresholds for horizontal coarse motion direction discrimination and fine motion direction discrimination performance in older adults. In a control experiment, using motion coherence above threshold for all conditions, the difference in performance for horizontal and vertical fine motion direction discrimination for older adults disappeared. These results clearly contradict the notion of an overall age-related decline in motion perception, and, most importantly, highlight the importance of taking into account individual differences when assessing age-related changes in perceptual functions.