2004
DOI: 10.1007/s00500-004-0374-7
Tree-based versus distance-based key recognition in musical audio

Abstract: A tree-based method for the recognition of the tonal center or key in a musical audio signal is presented. Time-varying key feature vectors of 264 synthesized sounds are extracted from an auditory-based pitch model and converted into character strings using PCA analysis and classification trees. The results are compared with distance-based methods. The characteristics of the new tonality analysis tool are illustrated on various examples. The potential of this method as a building stone in a music retrieval syst…
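To make the abstract's pipeline (feature vectors, PCA, classification trees) concrete, here is a minimal sketch, assuming 12-dimensional chroma-like key feature vectors and synthetic placeholder labels. It is not the paper's implementation: the auditory-based pitch model that produces the real feature vectors is not reproduced, and scikit-learn's PCA and DecisionTreeClassifier merely stand in for the PCA-analysis and classification-tree stages.

```python
# Illustrative sketch only: PCA followed by a classification tree on
# key-feature vectors.  Random data stands in for the auditory-model output.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# 24 possible tonal centres: 12 tonics x {major, minor}
KEYS = [f"{tonic}{mode}"
        for tonic in ["C", "C#", "D", "D#", "E", "F",
                      "F#", "G", "G#", "A", "A#", "B"]
        for mode in ("maj", "min")]

rng = np.random.default_rng(0)
X = rng.random((264, 12))            # placeholder key-feature vectors
y = rng.choice(KEYS, size=264)       # placeholder key labels

# Reduce dimensionality with PCA, then classify with a decision tree.
model = make_pipeline(PCA(n_components=4),
                      DecisionTreeClassifier(max_depth=8, random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))          # predicted keys for three excerpts
```

A distance-based alternative, of the kind the paper compares against, would instead correlate each feature vector with reference key profiles and report the key whose profile correlates best.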

Cited by 4 publications (2 citation statements)
References 23 publications (29 reference statements)
“…Note that, psychoacoustically, supertonics do not represent physical oddballs, at least not with respect to pitch commonality or roughness. Moreover, the pitches of the supertonics correlated higher with the pitches in the preceding harmonic context than those of the tonic chords (calculated according to the echoic-memory-based model [Martens et al., 2005]; see also Koelsch et al. [2007] for detailed correlations of local context (pitch image of the current chord) with global context (echoic memory representation as established by previously heard chords)).…”
Section: Stimuli
confidence: 95%
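For readers unfamiliar with the local/global correlation invoked in this citing study, the sketch below shows one simplified way such a comparison can be computed: a "global context" built by leaky integration of preceding pitch images, correlated with the current chord's pitch image. It assumes 12-bin chroma vectors as pitch images and a single decay constant, and does not reproduce the actual model of Martens et al. (2005).

```python
# Illustrative sketch only: correlating a chord's local pitch image with a
# global context formed by leaky ("echoic memory") integration of the
# preceding chords.  Placeholder 12-bin chroma vectors stand in for the
# auditory-model pitch images.
import numpy as np

def echoic_context(chord_images, decay=0.8):
    """Leaky integration of successive pitch images."""
    context = np.zeros_like(chord_images[0])
    for image in chord_images:
        context = decay * context + image
    return context

rng = np.random.default_rng(1)
progression = [rng.random(12) for _ in range(4)]   # four preceding chords
current = rng.random(12)                           # chord under test

context = echoic_context(progression)
r = np.corrcoef(current, context)[0, 1]            # local/global correlation
print(f"correlation with preceding context: {r:.2f}")
```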
“…There has been little research devoted to estimating tonality from audio signals [11], [12]. The spectrum of the energy along the pitch axis is called the chromagram. The chromagram is obtained by converting from the frequency domain to the pitch domain using a log-frequency transformation.…”
Section: A Tonality Analysis
confidence: 99%
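The quoted description of the chromagram can be made concrete with a short sketch. The following is an illustrative, simplified implementation (frame-wise power spectrum, FFT bins mapped to the 12 pitch classes via a log-frequency/MIDI transform), not the method of references [11] or [12]; a synthetic A4 sine wave stands in for real audio.

```python
# Illustrative sketch only: a simple chromagram via a log-frequency mapping
# of FFT bins onto the 12 pitch classes.
import numpy as np

def chromagram(signal, sr, n_fft=4096, hop=1024):
    """Energy per pitch class over time, via a log-frequency bin mapping."""
    window = np.hanning(n_fft)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    # Log-frequency transform: frequency -> MIDI note number -> pitch class.
    midi = 69 + 12 * np.log2(np.maximum(freqs, 1e-6) / 440.0)
    pitch_class = np.round(midi).astype(int) % 12
    starts = np.arange(0, len(signal) - n_fft, hop)
    chroma = np.zeros((12, len(starts)))
    for t, start in enumerate(starts):
        frame = signal[start:start + n_fft] * window
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        for pc in range(12):
            chroma[pc, t] = spectrum[pitch_class == pc].sum()
    return chroma

sr = 22050
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440.0 * t)     # one second of A4
c = chromagram(audio, sr)
print(c.argmax(axis=0)[:5])               # dominant pitch class per frame -> 9 (A)
```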