Musicians and nonmusicians listened to musical phrases that were either selected from the classical repertoire or composed for the experiments. The phrases ended either congruously or with a nondiatonic, diatonic, or rhythmic violation. Percentage of correct responses was analyzed in Experiment 1, and event-related potentials (ERPs) were recorded in Experiments 2 and 3. Musicians performed better than nonmusicians in recognizing familiar musical phrases and classifying terminal notes. The differences found as a function of expertise were larger for unfamiliar than for familiar melodies. The ERPs to the terminal notes differed in both amplitude and latency between musicians and nonmusicians, and as a function of participants' familiarity with the melodies and the type of violation. Results show that expertise influences the decisional rather than the purely perceptual aspects of music processing and that ERPs can provide important insight into the study of music perception.
Why is vocal music the oldest and still the most popular form of music? Very possibly because vocal music involves an intimate combination of speech and music, two of the most specific, high-level skills of human beings. The issue we address is whether people listening to a song treat the linguistic and musical components separately or integrate them within a single percept. Event-related potentials were recorded while musicians listened to excerpts from operas sung a cappella. Excerpts ended with semantically congruous or incongruous words sung either in or out of key. Results clearly demonstrated the independence of lyrics and tunes, so that an additive model of semantic- and harmonic-violation processing predicted the data extremely well. These results are consistent with a modular organization of the human cognitive system and open new perspectives in the search for the similarities and differences between language and music processing.
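The logic of the additive model can be sketched as follows. If lyrics and tunes are processed independently, the response to a double violation (incongruous word sung out of key) should equal the congruous baseline plus the sum of the two single-violation effects, with no interaction term. The function and the amplitude values below are illustrative assumptions, not data from the study.

```python
# Hedged sketch of an additive (no-interaction) model for combined violations.
# Effect of a single violation = (single-violation condition) - (congruous baseline).

def additive_prediction(congruous, semantic_only, harmonic_only):
    """Predict the double-violation response as the baseline plus the sum
    of the two single-violation effects (independence assumption)."""
    semantic_effect = semantic_only - congruous
    harmonic_effect = harmonic_only - congruous
    return congruous + semantic_effect + harmonic_effect

# Illustrative mean ERP amplitudes (in microvolts) per condition:
predicted = additive_prediction(congruous=1.0, semantic_only=-3.0, harmonic_only=-1.5)
print(predicted)  # -5.5: the two effects combine without interacting
```

Under this model, independence is supported when the observed double-violation response does not differ reliably from the additive prediction; a significant departure would instead indicate interactive (integrated) processing.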
Excerpts from French operatic songs were used to evaluate the extent to which language and music compete for processing resources. Do these two dimensions conflict? Are they integrated into a single percept? Or are they independent? The final word of each excerpt was either semantically congruous or incongruous relative to the prior linguistic context and was sung either in or out of key. Participants were asked to detect either the semantic or the melodic incongruity (single task) or both (dual task). We predicted a dual-task deficit if these tasks conflicted and no deficit if they were either independent or integrated. In order to distinguish between these last two outcomes, trial-by-trial contingency analyses were also computed, predicting no correlation if the tasks were conflicting or independent, a positive correlation under the assumption of integration, and a negative correlation if dividing attention is impossible. Our results show similar levels of performance in single and dual tasks and no correlation between dual-task judgments, thus suggesting that the semantic and melodic aspects of songs are processed by independent systems. In addition, a comparison between musicians and nonmusicians shows that these conclusions are independent of musical expertise.
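A trial-by-trial contingency analysis of this kind can be sketched as a correlation between two binary outcomes per dual-task trial: whether the semantic judgment was correct and whether the melodic judgment was correct. The helper below computes a phi coefficient from such pairs; the function name and the trial data are invented for illustration and do not reproduce the study's analysis or results.

```python
# Hedged sketch of a trial-by-trial contingency analysis. Each dual-task trial
# yields a pair (semantic_correct, melodic_correct) of 0/1 scores.
# phi near 0 -> independence; positive -> integration; negative -> a
# divided-attention trade-off between the two judgments.

def phi_coefficient(pairs):
    """Phi correlation between two binary outcomes across trials."""
    n = len(pairs)
    a = sum(1 for s, m in pairs if s and m)          # both judgments correct
    b = sum(1 for s, m in pairs if s and not m)      # semantic correct only
    c = sum(1 for s, m in pairs if not s and m)      # melodic correct only
    d = n - a - b - c                                # both judgments wrong
    denom = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    return (a * d - b * c) / denom if denom else 0.0

# Illustrative trial outcomes:
trials = [(1, 1), (1, 0), (0, 1), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1)]
print(round(phi_coefficient(trials), 2))  # near zero, as independence predicts
```

With this statistic, the reported pattern (no correlation between dual-task judgments, together with no dual-task performance deficit) corresponds to a phi coefficient close to zero.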