The perception of two (or more) simultaneous musical notes, depending on their pitch interval(s), can be broadly categorized as consonant or dissonant. Previous literature has suggested that musicians and non-musicians adopt different strategies when discerning musical intervals: musicians rely on the frequency ratio, defined as the ratio between the two fundamental frequencies (e.g., 3:2 for the perfect fifth C-G, consonant; vs. more complex ratios such as 45:32 for the tritone C-F#, dissonant), whereas non-musicians rely on frequency differences (e.g., the presence of beats, perceived as roughness), with the corresponding ERP group differences emerging around N1 (~160 ms) and P2 (~250 ms) along the midline electrodes. To replicate and extend these findings, in this study we reran the previous EEG experiment and separately collected fMRI data with the same protocol (modified for sparse sampling). The behavioral and EEG results largely replicated our previous findings: musicians used pitch intervals, and non-musicians roughness, for consonance judgments. The fMRI results, jointly analyzed with univariate, psycho-physiological interaction (PPI), multi-voxel pattern analysis (MVPA), and representational similarity analysis (RSA) approaches, further reinforce the involvement of midline and related brain regions in consonance/dissonance judgments. The final spatio-temporal searchlight RSA (ss-RSA), which combined the fMRI and EEG data in the same representational space, identified the medial prefrontal cortex, together with the bilateral superior temporal cortices, as the joint locus of the midline N1, and the dorsal anterior cingulate cortex as the locus of the P2 effect (in musicians). Together, these analyses not only reaffirm that musicians rely more on top-down knowledge for consonance/dissonance perception, but also demonstrate the advantages of multiple analyses in constraining the findings from both EEG and fMRI, and exemplify the importance of replication.
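The two candidate cues can be made concrete with a minimal sketch. Assuming equal-tempered tuning and an idealized harmonic series (the note names, MIDI numbers, and the 15 Hz / critical-band roughness criterion are illustrative assumptions, not the study's stimulus parameters), the following Python snippet contrasts the fundamental-frequency ratio with the beat rates of nearby partials for a perfect fifth and a tritone:

```python
from fractions import Fraction

def midi_to_hz(n: int) -> float:
    """Equal-tempered fundamental frequency for MIDI note n (A4 = 440 Hz)."""
    return 440.0 * 2.0 ** ((n - 69) / 12)

def critical_bandwidth(f: float) -> float:
    """Zwicker & Terhardt (1980) critical-bandwidth estimate in Hz."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f / 1000.0) ** 2) ** 0.69

def dyad_cues(lower: int, upper: int, n_harmonics: int = 3):
    """Contrast the two cues from the abstract for one dyad:
    (1) the fundamental-frequency ratio, reduced to a small fraction, and
    (2) beat rates of partial pairs close enough (within one critical
        band, yet beating faster than ~15 Hz) to be heard as rough."""
    f1, f2 = midi_to_hz(lower), midi_to_hz(upper)
    ratio = Fraction(f2 / f1).limit_denominator(100)
    rough_beats = []
    for p in (i * f1 for i in range(1, n_harmonics + 1)):
        for q in (j * f2 for j in range(1, n_harmonics + 1)):
            beat = abs(p - q)
            if 15.0 <= beat <= critical_bandwidth((p + q) / 2.0):
                rough_beats.append(round(beat, 1))
    return ratio, rough_beats

# Perfect fifth C4-G4: simple 3:2 ratio, no rough partial pairs.
print(dyad_cues(60, 67))  # -> (Fraction(3, 2), [])
# Tritone C4-F#4: no simple ratio in equal temperament (it lands near
# 99:70; the just-intonation value cited above is 45:32), and the 3rd
# harmonic of C4 beats at ~45 Hz against the 2nd harmonic of F#4.
print(dyad_cues(60, 66))  # -> (Fraction(99, 70), [44.9])
```

The sketch reproduces the intended dissociation: the fifth is distinguished by its small-integer ratio (the cue attributed to musicians), while the tritone produces audible beating between partials (the roughness cue attributed to non-musicians).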
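Similarly, the core step of the RSA and EEG-fMRI fusion (ss-RSA) analyses can be sketched as follows. This is a generic illustration, not the study's pipeline: the condition counts, voxel/channel/time dimensions, and random data below are placeholder assumptions, and real analyses would use cross-validated distances and group statistics.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Condensed representational dissimilarity matrix (1 - Pearson r
    between rows) for a conditions x features pattern matrix."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(0)

# Placeholder shapes: 12 interval conditions, 200 voxels in one fMRI
# searchlight, and 50 EEG time points x 64 channels.
n_cond, n_vox, n_time, n_chan = 12, 200, 50, 64
fmri_patterns = rng.standard_normal((n_cond, n_vox))
eeg_patterns = rng.standard_normal((n_time, n_cond, n_chan))

fmri_rdm = rdm(fmri_patterns)

# Fusion: Spearman-correlate the searchlight's RDM with the EEG RDM at
# every time point. Peaks in this time course would indicate when
# (e.g., around N1 ~160 ms or P2 ~250 ms) the scalp-level
# representational geometry matches that brain region.
fusion = np.array(
    [spearmanr(fmri_rdm, rdm(eeg_patterns[t]))[0] for t in range(n_time)]
)
print(fusion.shape)  # (50,): one correlation per EEG time point
```

Repeating this over all searchlight centers yields the spatio-temporal map that, in the study, localized the N1 effect to medial prefrontal and bilateral superior temporal cortices and the P2 effect to the dorsal anterior cingulate cortex.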