2011
DOI: 10.1177/0305735610391347

Genre identification of very brief musical excerpts

Abstract: The purpose of this study was to examine how well individuals were able to identify different music genres from very brief excerpts and whether musical training, gender and preference played a role in genre identification. Listeners were asked to identify genre from classical, jazz, country, metal, and rap/hip hop excerpts that were 125, 250, 500, or 1000 ms in length. Participants (N = 347), students recruited from three college campuses in the southeast region of the USA, were found to be quite successful in…

Cited by 21 publications (27 citation statements)
References 37 publications
“…Schedl et al [374] provide a URL for obtaining the list of the artists in their dataset, but the resource no longer exists. Mace et al [258] also provide a list, but since they list only the song and artist names, uncertainty arises: e.g., which recording of "The Unanswered Question" by Ives do they use? It is impossible to recreate the dataset used in [48, 49], since the authors state only that they assembled 850 audio examples in 17 different genres.…”
Section: Datasets
Mentioning confidence: 99%
“…While we consider both "genre" and "style," and make no attempt to differentiate them, we do not include "mood" or "emotion," e.g., [471]. We are herein interested only in the ways systems for MGR are evaluated, be they algorithms, humans [79, 169, 201, 258, 261, 262, 278, 290, 366, 367, 370, 381, 383, 460], pigeons [347], sparrows [439, 440], koi [58], primates [278] or rats [317]. To facilitate this survey, we created a spreadsheet summarizing every relevant paper we found in terms of its experimental design, details of the datasets it uses, and the figures of merit it reports.…”
Section: Introduction
Mentioning confidence: 99%
“…The work of Gjerdingen and Perrott (2008) is a widely cited study of human music genre classification (Aucouturier and Pampalk 2008), and Krumhansl (2010) and Mace et al (2011) extend this work. Both Ahrendt (2006) and Meng and Shawe-Taylor (2008) use listening tests to gauge the difficulty of discriminating the genres of their datasets, and to compare with the performance of their systems.…”
Section: Evaluating Behavior By Listening Tests
Mentioning confidence: 54%
“…This claim and its origins are mysterious, because nothing about MGR (the problem of identifying, discriminating between, and learning the criteria of music genres or styles) naturally restricts the number of genre labels people use to describe a piece of music. Perhaps this imagined limitation of MGR comes from the fact that, of the 435 works with an experimental component we survey (Sturm 2012a), we find only ten that use a multilabel approach (Barbedo and Lopes 2008; Lukashevich et al 2009; Mace et al 2011; McKay 2004; Sanden 2010; Sanden and Zhang 2011a, b; Scaringella et al 2006; Tacchini and Damiani 2011; Wang et al 2009). Perhaps it comes from the fact that most of the private and public datasets so far used in MGR assume a model of one genre per musical excerpt (Sturm 2012a).…”
Section: Arguments
Mentioning confidence: 96%