Interactive generative musical performance provides a suitable model for communication because, like natural linguistic discourse, it involves an exchange of ideas that is unpredictable, collaborative, and emergent. Here we show that interactive improvisation between two musicians is characterized by activation of perisylvian language areas linked to processing of syntactic elements in music, including inferior frontal gyrus and posterior superior temporal gyrus, and deactivation of angular gyrus and supramarginal gyrus, brain structures directly implicated in semantic processing of language. These findings support the hypothesis that musical discourse engages language areas of the brain specialized for processing of syntax but in a manner that is not contingent upon semantic processing. Therefore, we argue that neural regions for syntactic processing are not domain-specific for language but instead may be domain-general for communication.
The aim of this study was to determine if the angular vestibulo-ocular reflex (VOR) in response to pitch, roll, left anterior-right posterior (LARP), and right anterior-left posterior (RALP) head rotations exhibited the same linear and nonlinear characteristics as those found in the horizontal VOR. Three-dimensional eye movements were recorded with the scleral search coil technique. The VOR in response to rotations in five planes (horizontal, vertical, torsional, LARP, and RALP) was studied in three squirrel monkeys. The latency of the VOR evoked by steps of acceleration in darkness (3,000°/s² reaching a velocity of 150°/s) was 5.8 ± 1.7 ms and was the same in response to head rotations in all five planes of rotation. The gain of the reflex during the acceleration was 36.7 ± 15.4% greater than that measured at the plateau of head velocity. Polynomial fits to the trajectory of the response show that eye velocity is proportional to the cube of head velocity in all five planes of rotation. For sinusoidal rotations of 0.5–15 Hz with a peak velocity of 20°/s, the VOR gain did not change with frequency (0.74 ± 0.06, 0.74 ± 0.07, 0.37 ± 0.05, 0.69 ± 0.06, and 0.64 ± 0.06 for yaw, pitch, roll, LARP, and RALP, respectively). The VOR gain increased with head velocity for sinusoidal rotations at frequencies ≥4 Hz. For rotational frequencies ≥4 Hz, we show that the vertical, torsional, LARP, and RALP VORs have the same linear and nonlinear characteristics as the horizontal VOR. In addition, we show that the gain, phase, and axis of eye rotation during LARP and RALP head rotations can be predicted once the pitch and roll responses are characterized.
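The geometric idea behind predicting LARP and RALP responses from the pitch and roll data can be sketched as follows. This is a simplified illustration under our own assumption of a purely vectorial combination (the full prediction in the study also accounts for phase and the axis of eye rotation); the function name and the 45° axis convention are ours, though the gain values plugged in are the means reported above.

```python
import math

def predicted_gain(g_pitch, g_roll, angle_deg=45.0):
    """Predict VOR gain for a head-rotation axis lying `angle_deg` away
    from the pitch axis in the pitch-roll plane, by resolving the rotation
    onto the pitch and roll axes, scaling each component by that axis's
    gain, and recombining the two components vectorially."""
    a = math.radians(angle_deg)
    return math.hypot(g_pitch * math.cos(a), g_roll * math.sin(a))

# LARP and RALP axes lie midway (45 degrees) between pitch and roll, so a
# first-order prediction combines the reported pitch (0.74) and roll (0.37)
# gains:
g_larp = predicted_gain(0.74, 0.37)
```

Note that this naive model underpredicts the measured LARP gain of 0.69, which is one reason the study characterizes phase and rotation axis as well, rather than gain alone.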
Bass frequency impairments contribute to sound quality deteriorations during music listening for CI users. CI-MUSHRA provided a systematic and quantitative assessment of this reduced sound quality. Although the effects of bass frequency removal were studied here, we advocate CI-MUSHRA as a user-friendly and versatile research tool to measure the effects of a wide range of acoustic manipulations on sound quality perception in CI users.
Emotion is a primary motivator for creative behaviors, yet the interaction between the neural systems involved in creativity and those involved in emotion has not been studied. In the current study, we addressed this gap by using fMRI to examine piano improvisation in response to emotional cues. We showed twelve professional jazz pianists photographs of an actress representing a positive, negative, or ambiguous emotion. Using a non-ferromagnetic thirty-five-key keyboard, the pianists improvised music that they felt represented the emotion expressed in the photographs. Here we show that activity in prefrontal and other brain networks involved in creativity is highly modulated by emotional context. Furthermore, emotional intent directly modulated functional connectivity of limbic and paralimbic areas such as the amygdala and insula. These findings suggest that emotion and creativity are tightly linked, and that the neural mechanisms underlying creativity may depend on emotional state.
Objective: Cochlear implant (CI) users struggle with tasks of pitch-based prosody perception. Pitch pattern recognition is vital for both music comprehension and understanding the prosody of speech, which signals emotion and intent. Research in normal-hearing individuals shows that auditory-motor training, in which participants produce the auditory pattern they are learning, is more effective than passive auditory training. We investigated whether auditory-motor training of CI users improves complex sound perception, such as vocal emotion recognition and pitch pattern recognition, compared with purely auditory training. Study Design: Prospective cohort study. Setting: Tertiary academic center. Patients: Fifteen postlingually deafened adults with CIs. Intervention(s): Participants were divided into 3 one-month training groups: auditory-motor (intervention), auditory-only (active control), and no training (control). Auditory-motor training was conducted with the “Contours” software program and auditory-only training was completed with the “AngelSound” software program. Main Outcome Measure: Pre- and posttest examinations included tests of speech perception (consonant–nucleus–consonant, hearing-in-noise test sentence recognition), speech prosody perception, pitch discrimination, and melodic contour identification. Results: Participants in the auditory-motor training group performed better than those in the auditory-only and no-training groups (p < 0.05) for the melodic contour identification task. No significant training effect was noted on tasks of speech perception, speech prosody perception, or pitch discrimination. Conclusions: These data suggest that short-term auditory-motor music training of CI users impacts pitch pattern recognition. This study offers approaches for enriching the world of complex sound in the CI user.
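Melodic contour identification tasks of the kind described above present short note sequences whose pitch pattern the listener must label. The sketch below shows a hypothetical construction of such stimuli; the contour names, the five-note length, the 220 Hz base, and the two-semitone spacing are illustrative assumptions, not parameters taken from the "Contours" software.

```python
# Step patterns in scale degrees; each trial plays one pattern and the
# listener identifies which contour shape was heard.
CONTOURS = {
    "rising":    [0, 1, 2, 3, 4],
    "falling":   [4, 3, 2, 1, 0],
    "flat":      [0, 0, 0, 0, 0],
    "rise-fall": [0, 2, 4, 2, 0],
    "fall-rise": [4, 2, 0, 2, 4],
}

def contour_frequencies(name, base_hz=220.0, semitone_spacing=2):
    """Convert a contour's step pattern into note frequencies in Hz,
    with successive steps separated by `semitone_spacing` semitones
    (equal temperament: one semitone multiplies frequency by 2**(1/12))."""
    return [base_hz * 2 ** (step * semitone_spacing / 12)
            for step in CONTOURS[name]]

freqs = contour_frequencies("rising")
```

Widening `semitone_spacing` makes contours easier to distinguish, which is how such tasks are typically made adaptive to a listener's pitch resolution.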
Cochlear implant (CI) biomechanical constraints result in impoverished spectral cues and poor frequency resolution, making it difficult for users to perceive pitch and timbre. There is emerging evidence that music training may improve CI-mediated music perception; however, many of the existing studies involve time-intensive and less readily accessible in-person music training paradigms, without rigorous experimental controls. Online resources for auditory rehabilitation remain an untapped potential resource for CI users. Furthermore, establishing immediate value from an acute music training program may encourage CI users to adhere to post-implantation rehabilitation exercises. In this study, we evaluated the impact of an acute online music training program on pitch discrimination and timbre identification. Via a randomized controlled crossover study design, 20 CI users and 21 normal hearing (NH) adults were assigned to one of two arms. Arm-A underwent 1 month of online self-paced music training (intervention) followed by 1 month of audiobook listening (control). Arm-B underwent 1 month of audiobook listening followed by 1 month of music training. Pitch and timbre sensitivity scores were taken across three visits: (1) baseline, (2) after 1 month of intervention, and (3) after 1 month of control. We found that performance improved in pitch discrimination among CI users and NH listeners, with both online music training and audiobook listening. Music training, however, provided slightly greater benefit for instrument identification than audiobook listening. For both tasks, this improvement appears to be related to both fast stimulus learning as well as procedural learning. In conclusion, auditory training (with either acute participation in an online music training program or audiobook listening) may improve performance on untrained tasks of pitch discrimination and timbre identification.
These findings demonstrate a potential role for music training in perceptual auditory appraisal of complex stimuli. Furthermore, this study highlights the importance and the need for more tightly controlled training studies in order to accurately evaluate the impact of rehabilitation training protocols on auditory processing.
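The crossover design described above separates a training-specific effect from general test-retest learning by having each participant complete both the intervention and the control period, with the order swapped between arms. A minimal sketch of that analysis logic, using hypothetical scores rather than the study's data:

```python
def period_changes(visits):
    """visits = (baseline, after_period_1, after_period_2) scores.
    Returns the score change over each of the two periods."""
    v0, v1, v2 = visits
    return v1 - v0, v2 - v1

# Arm A: music training first, then audiobook listening (control).
# Arm B: audiobook listening first, then music training.
arm_a = period_changes((60, 72, 75))   # training gain 12, control gain 3
arm_b = period_changes((58, 64, 73))   # control gain 6, training gain 9

# Pool the training-period and control-period changes across both arms:
training_gain = (arm_a[0] + arm_b[1]) / 2
control_gain = (arm_a[1] + arm_b[0]) / 2
```

If `control_gain` is well above zero, as the study found for pitch discrimination, the improvement reflects procedural or stimulus learning rather than the trained material itself; only the margin of `training_gain` over `control_gain` is attributable to the music training.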
IMPORTANCE Cochlear implant users generally display poor pitch perception. Flat-panel computed tomography (FPCT) has recently emerged as a modality capable of localizing individual electrode contacts within the cochlea in vivo. Significant place-pitch mismatch between the clinical implant processing settings given to patients and the theoretical maps based on FPCT imaging has previously been noted. OBJECTIVE To assess whether place-pitch mismatch is associated with poor cochlear implant-mediated pitch perception through evaluation of an individualized, image-guided approach toward cochlear implant programming on speech and music perception among cochlear implant users. DESIGN, SETTING, AND PARTICIPANTS A prospective cohort study of 17 cochlear implant users with MED-EL electrode arrays was performed at a tertiary referral center. The study was conducted from June 2016 to July 2017. INTERVENTIONS Theoretical place-pitch maps using FPCT secondary reconstructions and 3-dimensional curved planar reformation software were developed. The clinical map settings (eg, strategy, rate, volume, frequency band range) were modified to keep factors constant between the 2 maps and minimize confounding. The acclimation period to the maps was 30 minutes. MAIN OUTCOMES AND MEASURES Participants performed speech perception tasks (eg, consonant-nucleus-consonant, Bamford-Kowal-Bench Speech-in-Noise, vowel identification) and a pitch-scaling task while using the image-guided place-pitch map (intervention) and the modified clinical map (control). Performance scores between the 2 interventions were compared. RESULTS Of the 17 participants, 10 (58.8%) were women; mean (SD) age was 59 (11.3) years. A significant median increase in pitch-scaling accuracy was noted when using the experimental map compared with the control map (4 more correct answers; 95% CI, 0-8).
Specifically, the number of pitch-scaling reversals for notes spaced at 1.65 semitones or greater decreased when an image-based approach to cochlear implant programming was used vs the modified clinical map (4 mistakes; 95% CI, 0.5-7). Although there was no observable median improvement in speech perception during use of an image-based map, the acute changes in frequency allocation and electrode channel deactivations used with the image-guided maps did not worsen consonant-nucleus-consonant (−1% correct phonemes; 95% CI, −2.5% to 6%) and Bamford-Kowal-Bench Speech-in-Noise (0.5-dB difference; 95% CI, −0.75 to 2.25 dB) median performance results relative to the clinical maps used by the patients. CONCLUSIONS AND RELEVANCE An image-based approach toward cochlear implant mapping may improve pitch perception outcomes by reducing place-pitch mismatch. Studies using a longer acclimation period with chronic stimulation over months may help assess the full range of the benefits associated with personalized image-guided cochlear implant mapping.
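Image-guided place-pitch maps of the kind evaluated above assign each electrode a frequency band matching its measured position along the cochlea. The standard model for that place-to-frequency relationship is Greenwood's function; the constants below are Greenwood's published human fit, but their use here is our illustrative assumption, not a description of the study's actual reformation software.

```python
def greenwood_frequency(x):
    """Characteristic frequency in Hz at relative cochlear position x,
    where x = 0 at the apex (low frequencies) and x = 1 at the base
    (high frequencies). Human fit: f = A * (10**(a*x) - k)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# A hypothetical electrode contact localized (e.g., via FPCT) two-thirds
# of the way from apex to base would, under this model, be assigned a
# center frequency near 4 kHz:
f = greenwood_frequency(2 / 3)
```

Place-pitch mismatch arises when the frequency band a clinical map routes to an electrode differs substantially from the characteristic frequency this function predicts for that electrode's measured position.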
Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H₂¹⁵O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study.