Young men and women were compared on the speeded repetition of speech (ns = 20 and 18, respectively) and manual movements (ns = 37 and 38). The repetition of a single speech or manual movement served as a baseline measure of speed, against which the repetition of a sequence of movements was compared. Males tended to be faster at repeating a single movement, but using baseline speed as a covariate resulted in a female advantage for the repetition of a sequence of movements. It was concluded that men have a basic motor-speed advantage, but that women may be faster at programming a sequence of speech or manual movements. The results are discussed with respect to sex differences in the neural organization of motor programming systems.
The distinction between the processing of musical information and segmental speech information (i.e., consonants and vowels) has been much explored. In contrast, the relationship between the processing of music and prosodic speech information (e.g., intonation) has been largely ignored. We report an assessment of prosodic perception for an amateur musician, KB, who became amusic following a right-hemisphere stroke. Relative to matched controls, KB's segmental speech perception was preserved. However, KB was unable to discriminate pitch or rhythm patterns in linguistic or musical stimuli. He was also impaired on prosodic perception tasks (e.g., discriminating statements from questions). Results are discussed in terms of common neural mechanisms that may underlie the processing of some aspects of both music and speech prosody.
Although visual object recognition is primarily shape driven, colour assists the recognition of some objects. It is unclear, however, just how colour information is coded with respect to shape in long-term memory and how the availability of colour in the visual image facilitates object recognition. We examined the role of colour in the recognition of novel, 3-D objects by manipulating the congruency of object colour across the study and test phases, using an old/new shape-identification task. In experiment 1, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented in their original colour, rather than in a different colour. In experiments 2 and 3, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented with their original part-colour conjunctions, rather than in different or in reversed part-colour conjunctions. In experiment 4, we found that participants were quite poor at the verbal recall of part-colour conjunctions for correctly identified old objects, presented as grey-scale images at test. In experiment 5, we found that participants were significantly slower at correctly identifying old objects when object colour was incongruent across study and test, than when background colour was incongruent across study and test. The results of these experiments suggest that both shape and colour information are stored as part of the long-term representation of these novel objects. Results are discussed in terms of how colour might be coded with respect to shape in stored object representations.
It is well established that vision plays a role in segmental speech perception, but the role of vision in prosodic speech perception is less clear. We report on the difficulties in prosodic speech perception encountered by KB after a right-hemisphere stroke. In addition to musical deficits, KB was suspected of having impaired auditory prosody perception. As expected, KB was impaired on two prosody perception tasks in an auditory-only condition. We also examined whether the addition of visual prosody cues would facilitate his performance on these tasks. Unexpectedly, KB was also impaired on both tasks under visual-only and audio-visual conditions. Thus, there was no evidence that KB could integrate auditory and visual prosody information or that he could use visual cues to compensate for his deficit in the auditory domain. In contrast, KB was able to identify segmental speech information using visual cues and to use these visual cues to improve his performance when auditory segmental cues were impoverished. KB was also able to integrate audio-visual segmental information in the McGurk effect. Thus, KB's visual deficit was specific to prosodic speech perception and, to our knowledge, this is the first reported case of such a deficit.