Humans are remarkably good at recognizing biological motion, even when depicted as point-light animations. There is currently some debate as to the relative importance of form and motion cues in the perception of biological motion from these simple dot displays. To investigate this issue, we adapted the "Bubbles" technique, most commonly used in face and object perception, to isolate the critical features for point-light biological motion perception. We find that observer sensitivity waxes and wanes during the course of an action, with peak discrimination performance most strongly correlated with moments of local opponent motion of the extremities. When dynamic cues are removed, instances that are most perceptually salient become the least salient, evidence that the strategies employed during point-light biological motion perception are not effective for recognizing human actions from static patterns. We conclude that local motion features, not global form templates, are most critical for perceiving point-light biological motion. These experiments also present a useful technique for identifying key features of dynamic events.
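The Bubbles technique described above reveals a stimulus only through randomly placed apertures on each trial. A minimal sketch of how a space-time bubbles mask might be generated, assuming Gaussian apertures in space and time (all parameter names and values are illustrative, not taken from the study):

```python
import numpy as np

def bubbles_mask(shape=(64, 64, 30), n_bubbles=10, sigma_xy=4.0, sigma_t=2.0, rng=None):
    """Build a space-time mask that reveals the stimulus only through
    randomly placed Gaussian apertures ("bubbles").

    shape : (height, width, n_frames) of the stimulus movie
    """
    rng = np.random.default_rng(rng)
    h, w, t = shape
    y, x, f = np.meshgrid(np.arange(h), np.arange(w), np.arange(t), indexing="ij")
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        # Random bubble center in space (cy, cx) and time (ct)
        cy, cx, ct = rng.integers(h), rng.integers(w), rng.integers(t)
        mask += np.exp(-(((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma_xy ** 2)
                         + (f - ct) ** 2 / (2 * sigma_t ** 2)))
    # Clip to [0, 1]; multiplying the stimulus by this mask yields the trial movie
    return np.clip(mask, 0.0, 1.0)

mask = bubbles_mask(rng=0)
```

Across many trials, correlating mask values with the observer's accuracy identifies which space-time regions drive discrimination.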
There is extensive laboratory research studying the effects of acute sleep deprivation on biological and cognitive functions, yet much less is known about naturalistic patterns of sleep loss and the potential impact on daily or weekly functioning of an individual. Longitudinal studies are needed to advance our understanding of relationships between naturalistic sleep and fluctuations in human health and performance, but it is first necessary to understand the efficacy of current tools for long-term sleep monitoring. The present study used wrist actigraphy and sleep log diaries to obtain daily measurements of sleep from 30 healthy adults for up to 16 consecutive weeks. We used non-parametric Bland-Altman analysis and correlation coefficients to calculate agreement between subjectively and objectively measured variables including sleep onset time, sleep offset time, sleep onset latency, number of awakenings, the amount of wake time after sleep onset, and total sleep time. We also examined compliance data on the submission of daily sleep logs according to the experimental protocol. Overall, we found strong agreement for sleep onset and sleep offset times, but relatively poor agreement for variables related to wakefulness including sleep onset latency, awakenings, and wake after sleep onset. Compliance tended to decrease significantly over time according to a linear function, but there were substantial individual differences in overall compliance rates. There were also individual differences in agreement that could be explained, in part, by differences in compliance. Individuals who were consistently more compliant over time also tended to show better agreement and lower scores on the Behavioral Inhibition Scale (BIS).
Our results provide evidence for convergent validity in measuring sleep onset and sleep offset with wrist actigraphy and sleep logs, and we conclude by proposing an analysis method to mitigate the impact of non-compliance and measurement errors when the two methods provide discrepant estimates.
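The non-parametric Bland-Altman analysis mentioned above quantifies agreement between two measurement methods by summarizing the distribution of their paired differences: the median difference estimates bias, and percentiles of the differences give distribution-free limits of agreement. A minimal sketch, assuming paired nightly estimates from actigraphy and sleep logs (the data values here are fabricated for illustration):

```python
import numpy as np

def bland_altman_nonparametric(m1, m2, coverage=0.95):
    """Non-parametric Bland-Altman: bias as the median difference,
    limits of agreement as percentiles of the differences."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    diff = m1 - m2                   # e.g. actigraphy minus sleep log
    lo_q = (1 - coverage) / 2        # 2.5th percentile for 95% coverage
    bias = np.median(diff)
    lower = np.quantile(diff, lo_q)
    upper = np.quantile(diff, 1 - lo_q)
    return bias, (lower, upper)

# Illustrative total sleep time (minutes) per night from the two methods
actigraphy = [412, 388, 430, 405, 399, 420, 415]
sleep_log = [420, 395, 425, 410, 405, 430, 410]
bias, (lo, hi) = bland_altman_nonparametric(actigraphy, sleep_log)
```

Plotting each difference against the pair's mean, with the bias and limits overlaid, completes the standard Bland-Altman picture and makes systematic disagreement between the two methods visible.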
Point-light animations of biological motion are perceived quickly and spontaneously, giving rise to an irresistible sensation of animacy. However, the mechanisms that support judgments of animacy based on biological motion remain unclear. The current study demonstrates that animacy ratings increase when a spatially scrambled animation of human walking maintains consistency with two fundamental constraints: the direction of gravity and congruency between the directions of intrinsic and extrinsic motion. Furthermore, using a reverse-correlation method, we show that observers employ structural templates, or form-based "priors," reflecting the prototypical mammalian body plan when attributing animacy to scrambled human forms. These findings reveal that perception of animacy in scrambled biological motion involves not only analysis of local intrinsic motion, but also its congruency with global extrinsic motion and global spatial structure. Thus, they suggest a strong influence of prior knowledge about characteristic features of creatures in the natural environment.
Among the most common events in our daily lives is seeing people in action. Scientists have accumulated evidence suggesting humans may have developed specialized mechanisms for recognizing these visual events. In the current experiments, we apply the "bubbles" technique to construct space-time classification movies that reveal the key features human observers use to discriminate biological motion stimuli (point-light and stick figure walkers). We find that observers rely on similar features for both types of stimuli, namely, form information in the upper body and dynamic information in the relative motion of the limbs. To measure the contributions of motion and form analyses in this task, we computed classification movies from the responses of a biologically plausible model that can discriminate biological motion patterns (M. A. Giese & T. Poggio, 2003). The model classification movies reveal similar key features to observers, with the model's motion and form pathways each capturing unique aspects of human performance. In a second experiment, we computed classification movies derived from trials of varying exposure times (67-267 ms) and demonstrate the transition to form-based strategies as motion information becomes less available. Overall, these results highlight the relative contributions of motion and form computations to biological motion perception.
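The classification-movie logic underlying the bubbles analyses above can be reduced to a simple reverse-correlation rule: stimulus regions revealed more often on correct than on incorrect trials receive positive weight. The sketch below is a deliberately simplified version of that idea (the published analyses are more elaborate), using simulated masks and a simulated observer:

```python
import numpy as np

def classification_movie(masks, correct):
    """Reverse correlation: voxels revealed more often on correct than on
    incorrect trials get positive weight in the classification movie.

    masks   : (n_trials, h, w, t) array of bubble masks in [0, 1]
    correct : (n_trials,) boolean array of trial-by-trial accuracy
    """
    masks = np.asarray(masks, float)
    correct = np.asarray(correct, bool)
    # Difference of mean masks on correct vs. incorrect trials
    return masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)

rng = np.random.default_rng(0)
masks = rng.random((100, 8, 8, 5))
# Simulate an observer who responds correctly whenever one
# "diagnostic" space-time voxel happens to be revealed
correct = masks[:, 4, 4, 2] > 0.5
cm = classification_movie(masks, correct)
```

In this simulation the diagnostic voxel emerges as the peak of the classification movie; applied to real trials, the same computation localizes the features observers actually use.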
Existing data indicate that cortical speech processing is hierarchically organized. Numerous studies have shown that early auditory areas encode fine acoustic details while later areas encode abstracted speech patterns. However, it remains unclear precisely what speech information is encoded across these hierarchical levels. Estimation of speech-driven spectrotemporal receptive fields (STRFs) provides a means to explore cortical speech processing in terms of acoustic or linguistic information associated with characteristic spectrotemporal patterns. Here, we estimate STRFs from cortical responses to continuous speech in fMRI. Using a novel approach based on filtering randomly selected spectrotemporal modulations (STMs) from aurally presented sentences, STRFs were estimated for a group of listeners and categorized using a data-driven clustering algorithm. 'Behavioral STRFs' highlighting STMs crucial for speech recognition were derived from intelligibility judgments. Clustering revealed that STRFs in the supratemporal plane represented a broad range of STMs, while STRFs in the lateral temporal lobe represented circumscribed STM patterns important to intelligibility. Detailed analysis recovered a bilateral organization with posterior-lateral regions preferentially processing STMs associated with phonological information and anterior-lateral regions preferentially processing STMs associated with word- and phrase-level information. Regions in lateral Heschl's gyrus preferentially processed STMs associated with vocalic information (pitch).
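An STRF is, at heart, a set of linear weights mapping the recent stimulus spectrogram onto a measured response, and a common way to estimate one is regularized (ridge) regression on a time-lagged design matrix. The sketch below illustrates that generic approach on simulated data; it is not the STM-filtering method of the study above, and the lag count and regularization strength are illustrative:

```python
import numpy as np

def estimate_strf(spectrogram, response, n_lags=10, alpha=1.0):
    """Estimate a spectrotemporal receptive field by ridge regression.

    spectrogram : (n_times, n_freqs) stimulus representation
    response    : (n_times,) measured response (e.g. a voxel time series)
    Returns an (n_lags, n_freqs) STRF: weights on the recent stimulus history.
    """
    n_times, n_freqs = spectrogram.shape
    # Lagged design matrix: block `lag` of row t holds spectrogram[t - lag]
    X = np.zeros((n_times, n_lags * n_freqs))
    for lag in range(n_lags):
        X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spectrogram[:n_times - lag]
    # Ridge solution: w = (X'X + alpha * I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ response)
    return w.reshape(n_lags, n_freqs)

# Simulated check: the response is the 8th frequency band delayed by 3 frames,
# so the recovered STRF should peak at lag 3, frequency 8
rng = np.random.default_rng(1)
stim = rng.standard_normal((2000, 16))
resp = np.roll(stim[:, 8], 3)
resp[:3] = 0.0
est = estimate_strf(stim, resp)
```

For fMRI, the same machinery applies after accounting for the slow hemodynamic response, typically by using lags spanning several seconds.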