Death Metal music with violent themes is characterized by vocalizations with unnaturally low fundamental frequencies and high levels of distortion and roughness. These attributes decrease the signal-to-noise ratio, rendering linguistic content difficult to understand and leaving the impression of growling, screaming, or other non-linguistic vocalizations associated with aggression and fear. Here, we compared the ability of fans and non-fans of Death Metal to accurately perceive sung words extracted from Death Metal music. We also examined whether music training confers an additional benefit to intelligibility. In a 2 × 2 between-subjects factorial design (fans/non-fans, musicians/non-musicians), four groups of participants (n = 16 per group) were presented with 24 sung words (one per trial) extracted from recordings by the popular American Death Metal band Cannibal Corpse. On each trial, participants completed a four-alternative forced-choice word recognition task. Intelligibility (word recognition accuracy) was above chance for all groups and was significantly enhanced for fans (65.88%) relative to non-fans (51.04%). Among fans, intelligibility was statistically similar for musicians and non-musicians. Among non-fans, intelligibility was significantly greater for musicians than for non-musicians. Results are discussed in the context of perceptual learning and the benefits of expertise for decoding linguistic information under suboptimal acoustic conditions.
Human cortical activity measured with magnetoencephalography (MEG) has been shown to track the temporal regularity of linguistic information in connected speech. In the current study, we investigate the underlying neural sources of these responses and test the hypothesis that they can be directly modulated by changes in speech intelligibility. MEG responses were measured to natural and spectrally degraded (noise-vocoded) speech in 19 normal-hearing participants. Results showed that cortical coherence to “abstract” linguistic units with no accompanying acoustic cues (phrases and sentences) was lateralized to the left hemisphere and changed parametrically with the intelligibility of speech. In contrast, responses coherent with words/syllables accompanied by acoustic onsets were bilateral and insensitive to intelligibility changes. This dissociation suggests that cerebral responses to linguistic information are directly affected by intelligibility but also powerfully shaped by physical cues in speech. This explains why previous studies have reported widely inconsistent effects of speech intelligibility on cortical entrainment and, within a single experiment, provides clear support for conclusions about language lateralization derived from a large number of separately conducted neuroimaging studies. Since noise-vocoded speech resembles the signal provided by a cochlear implant device, the current methodology has potential clinical utility for the assessment of cochlear implant performance.
This response argues against the proposal that novel utterances are formed by analogy with stored exemplars that are close in meaning. Strings of words that are similar in meaning or even identical can behave very differently once inserted into different syntactic environments. Furthermore, phrases with similar meanings but different underlying syntactic structures can give rise to disparities in meaning in particular contexts. In short, analogy falls short of explaining both adult knowledge of language and children’s syntactic development.
Providing a plausible neural substrate of speech processing and language comprehension, cortical activity has been shown to track different levels of linguistic structure in connected speech (syllables, phrases, and sentences), independent of the physical regularities of the acoustic stimulus. In the current study, we investigated the effect of speech intelligibility on this brain activity as well as the underlying neural sources. Using magnetoencephalography (MEG), we measured brain responses to natural speech and noise-vocoded (spectrally degraded) speech in nineteen normal-hearing participants. Results showed that cortical MEG coherence to linguistic structure changed parametrically with the intelligibility of the speech signal. Cortical responses coherent with phrase and sentence structures were left-hemisphere lateralized, whereas responses coherent with syllable/word structure were bilateral. The enhancement of coherence to intelligible compared with unintelligible speech was also left-lateralized and localized to the perisylvian cortex. These results demonstrate that cortical responses to higher-level linguistic structures (phrase and sentence level) are sensitive to speech intelligibility. Since the noise-vocoded sentences simulate the auditory input provided by a cochlear implant, such objective neurophysiological measures have potential clinical utility for the assessment of cochlear implant performance.