The ability to flexibly adapt one’s behavior is critical for social tasks such as speech and music performance, in which individuals must coordinate the timing of their actions with others. Natural movement frequencies, also called spontaneous rates, constrain synchronization accuracy between partners during duet music performance, whereas musical training enhances it. We investigated the combined influences of these factors on the flexibility with which individuals can synchronize their actions with sequences presented at different rates. First, we developed a novel musical task, capable of measuring spontaneous rates in both musicians and non-musicians, in which participants tapped the rhythm of a familiar melody while hearing the corresponding melody tones. The task was validated by comparing spontaneous-rate measures obtained from the same pianists during piano performance and during the tapping task, which yielded similar values. We then implemented the task with musicians and non-musicians as they synchronized their tapping of a familiar melody with a metronome set at their spontaneous rate, and at rates proportionally slower and faster than their spontaneous rate. Musicians synchronized more flexibly across rates than non-musicians, as indicated by greater synchronization accuracy. Additionally, musicians showed greater engagement of error correction mechanisms than non-musicians. Finally, these differences in flexibility were characterized by more recurrent (repetitive) and patterned synchronization in non-musicians, indicative of greater temporal rigidity.
Although it is understood that episodic memories of everyday events involve encoding a wide array of perceptual and non-perceptual information, it is unclear how these distinct types of information are recalled. To address this knowledge gap, we examined how perceptual (visual versus auditory) and non-perceptual details described within a narrative, a proxy for everyday event memories, were retrieved. Based on previous work indicating a bias for visual content, we hypothesized that participants would be most accurate at recalling visually described details and would tend to falsely recall non-visual details with visual descriptors. In Study 1, participants watched videos of a protagonist telling narratives of everyday events under three conditions: with visual, auditory, or audiovisual details. All narratives contained the same non-perceptual content. Participants’ free recall of these narratives under each condition was scored for the type of detail recalled (perceptual, non-perceptual) and whether the detail was recalled with gist or verbatim memory. We found that participants were more accurate at both gist and verbatim recall for visual perceptual details. This visual bias was also evident in the errors made during recall: participants tended to incorrectly recall details with visual, but not auditory, information. Study 2 tested whether this pattern of results held when the narratives were presented in an auditory-only format. The results conceptually replicated Study 1: a persistent visual bias remained in what was recollected from the complex narratives. Together, these findings indicate a bias for recruiting visualizable content to construct complex, multi-detail memories.