A fundamental question about the perception of time is whether the neural mechanisms underlying temporal judgements are universal and centralized in the brain or modality specific and distributed. Time perception has traditionally been thought to be entirely dissociated from spatial vision. Here we show that the apparent duration of a dynamic stimulus can be manipulated in a local region of visual space by adapting to oscillatory motion or flicker. This implicates spatially localized temporal mechanisms in duration perception. We do not see concomitant changes in the time of onset or offset of the test patterns, demonstrating a direct local effect on duration perception rather than an indirect effect on the time course of neural processing. The effects of adaptation on duration perception can also be dissociated from motion or flicker perception per se. Although 20 Hz adaptation reduces both the apparent temporal frequency and duration of a 10 Hz test stimulus, 5 Hz adaptation increases apparent temporal frequency but has little effect on duration perception. We conclude that there is a peripheral, spatially localized, essentially visual component involved in sensing the duration of visual events.
Head and facial movements can provide valuable cues to identity in addition to their primary roles in communicating speech and expression [1-8]. Here we report experiments in which we used recent motion capture and animation techniques to animate an average head [9]. These techniques allowed us to isolate motion from other cues and to separate rigid translations and rotations of the head from nonrigid facial motion. In particular, we tested whether human observers can judge sex and identity on the basis of this information. Results show that people can discriminate both between individuals and between males and females from motion-based information alone. Rigid head movements appear particularly useful for categorization on the basis of identity, while nonrigid motion is more useful for categorization on the basis of sex. Accuracy for both sex and identity judgements is reduced when faces are presented upside down, showing that performance is not based on low-level motion cues alone and suggesting that the information is represented in an object-based motion-encoding system specialized for upright faces. Playing animations backward also reduced performance for sex judgements, emphasizing the importance of direction specificity in accessing stored representations of characteristic male and female movements.
We propose that the perception of the relative time of events is based on the relationship of representations of temporal pattern that we term time markers. We conclude that the perceptual asynchrony effects studied here do not reflect differential neural delays for different attributes; rather, they arise from a faulty correspondence match between color transitions and position transitions (motion), which in turn results from a difficulty in detecting turning points (direction reversals) and a preference for matching markers of the same type.
When information about three-dimensional shape obtained from shading and shadows is ambiguous, the visual system favours an interpretation of surface geometry consistent with illumination from above. If pictures of top-lit faces are rotated, the resulting stimulus is both figurally inverted and illuminated from below. This study addresses whether the effects of figural inversion and lighting orientation on face recognition are independent or interactive. Although there was a clear inversion effect for faces illuminated from the front and above, the inversion effect was reduced or eliminated for faces illuminated from below. A strong inversion effect was also found for photographic negatives, but in this case the effect did not depend on the direction of illumination. These findings suggest that lighting faces from below disrupts the formation of surface-based representations of facial shape.