2020
DOI: 10.1002/brb3.1936
Correspondence of categorical and feature‐based representations of music in the human brain

Abstract: Introduction. Humans tend to categorize auditory stimuli into discrete classes, such as animal species, language, musical instrument, and music genre. Of these, music genre is a frequently used dimension of human music preference and is determined based on the categorization of complex auditory stimuli. Neuroimaging studies have reported that the superior temporal gyrus (STG) is involved in response to general music‐related features. However, there is considerable uncertainty over how discrete musi…

Cited by 18 publications (24 citation statements) · References 45 publications
“…For the auditory regressor, we used a modulation transfer function (MTF) model ( Nakai et al. 2021 ).…”
Section: Methods (mentioning)
Confidence: 99%
“…First, in our feature regression analysis, the acoustic features we selected may not represent the full range of acoustic dynamics occurring throughout each excerpt. Previous studies using encoding models to examine brain activity evoked by music employed a range of acoustic features, such as the modulation transfer function (Norman-Haignere et al., 2015; Patil et al., 2012) as well as music-related models representing mode, roughness, root mean square energy (RMS), and pulse clarity (Alluri et al., 2012; Nakai et al., 2021; Toiviainen et al., 2014). However, the types of information captured by these features are also roughly captured by the features used in this study.…”
Section: Discussion (mentioning)
Confidence: 99%
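The feature regression analysis described in this citation statement is a voxelwise encoding model: acoustic features extracted from the stimulus are regressed onto each voxel's response, and the fitted model is evaluated by how well it predicts held-out brain activity. A minimal sketch of that workflow, using synthetic data and ridge regression (all array sizes, feature counts, and the noise level here are illustrative assumptions, not values from the study):

```python
import numpy as np

# Synthetic stand-ins for the real data: X holds stimulus features per
# fMRI volume (e.g., RMS, roughness, pulse clarity), Y holds the
# measured response of each voxel at the same volumes.
rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 400, 100, 10, 50

W_true = rng.normal(size=(n_feat, n_vox))           # hidden ground truth
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
Y_train = X_train @ W_true + rng.normal(scale=0.5, size=(n_train, n_vox))
Y_test = X_test @ W_true + rng.normal(scale=0.5, size=(n_test, n_vox))

# Ridge regression: closed-form solution of
# (X'X + alpha*I) W = X'Y, fit jointly for all voxels.
alpha = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_feat),
                        X_train.T @ Y_train)

# Evaluate on held-out data: per-voxel correlation between predicted
# and measured responses is the usual encoding-model accuracy score.
Y_pred = X_test @ W_hat
r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
              for v in range(n_vox)])
print(f"mean held-out correlation: {r.mean():.2f}")
```

In practice the feature matrix would also include delayed copies of each feature to account for the hemodynamic response, and the ridge penalty would be chosen by cross-validation per voxel.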
“…We obtained 3 hours of neuroimaging data for each participant, allowing researchers to construct participant-wise models of brain activity. Although this dataset has been used in two previous studies [2, 3], there remains much room to apply different acoustic models. The original music stimuli have been widely used in previous music information retrieval studies (see [4] for a review).…”
Section: Data Description (mentioning)
Confidence: 99%
“…Test samples were measured while the sequence of test music clips was presented four times. Modified from [3].…”
Section: Data Description (mentioning)
Confidence: 99%