1994
DOI: 10.1080/09298219408570648

Auditory modelling and self‐organizing neural networks for timbre classification

Abstract: A timbre classification system based on auditory processing and Kohonen self-organizing neural networks is described. Preliminary results are given on a simple classification experiment involving 12 instruments in both clean and degraded conditions.
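The abstract describes the system only at a high level. As a rough illustration of the Kohonen self-organizing map component (not the paper's actual implementation; the grid size, learning-rate schedule, and neighbourhood parameters below are all assumptions), a minimal sketch in Python:

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal Kohonen self-organizing map on feature vectors.

    data: (n_samples, n_features) array, e.g. auditory spectral features
    extracted from instrument tones. Returns the trained weight grid of
    shape (rows, cols, n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    n_features = data.shape[1]
    w = rng.standard_normal((rows, cols, n_features)) * 0.1
    # Grid coordinates, used by the Gaussian neighbourhood function.
    coords = np.stack(
        np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1
    )
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)            # learning rate decays to 0
        sigma = sigma0 * (1.0 - frac) + 0.5  # neighbourhood shrinks over time
        x = data[rng.integers(len(data))]
        # Best-matching unit: the node whose weights are closest to x.
        d = np.linalg.norm(w - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood around the BMU on the 2-D grid.
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                   / (2.0 * sigma ** 2))
        # Pull the BMU and its neighbours toward the sample.
        w += lr * g[..., None] * (x - w)
    return w

def bmu_of(w, x):
    """Map a feature vector to its best-matching grid node."""
    d = np.linalg.norm(w - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, tones from the same instrument should map to nearby grid nodes, so timbre classes can be read off as clusters on the map.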

Cited by 42 publications (15 citation statements); References 17 publications.
“…4 A pilot study indeed suggested that this Cartesian hypothesis may provide an appropriate methodology for developing prediction models of affect description. 2 A number of computational models are nowadays available that extract perception-related properties from musical audio such as onset (e.g., Klapuri, 1999; Smith, 1994), beat (e.g., Toiviainen, 2001; Large & Kolen, 1994; Scheirer, 1998; Laroche, 2003), consonance (e.g., Aures, 1985; Daniel & Weber, 1997; Leman, 2000a), pitch (e.g., Clarisse et al, 2002; De Mulder et al, 2004), harmony, tonality (e.g., Terhardt, 1974; Parncutt, 1989; Leman, 1995, 2000b), timbre (e.g., Cosi et al, 1994; Toiviainen, 1996; De Poli & Prandoni, 1997). 3 Apart from a preliminary study by Scheirer et al (2000), we know of no other attempts that relate these and similar audio-extracted structural features to affect-based description of music.…”
Section: Introduction
confidence: 99%
“…20 The concept of a "timbre space" first suggested by Grey 15 was replicated within a neural network, and clustering was used to categorize sounds. A similar approach involving a self-learning neural network was adopted by Cosi et al, 21 capturing tone quality with MFCCs.…”
Section: Musical Instrument Classifiers Based on Timbral Considerations
confidence: 99%
“…Strategy B-1 was based upon mean MFCCs evaluated over the whole tone and was inspired by a number of previous musical instrument classifiers. 21,26,30 It formed a reduced-space representation of the average timbre of the tone.…”
Section: Onset Fingerprinting vs Whole Tone MFCCs
confidence: 99%
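The whole-tone strategy quoted above (averaging per-frame cepstral coefficients into one vector per tone) can be sketched as follows. This is a simplification, not the cited classifier: it uses a plain log-magnitude cepstrum rather than a true mel-warped MFCC, and the frame, hop, and coefficient counts are assumed values.

```python
import numpy as np

def mean_cepstral_features(signal, frame=512, hop=256, n_coeffs=13):
    """Average per-frame cepstral coefficients over a whole tone.

    Returns one (n_coeffs,) vector: a reduced-space representation of
    the tone's average timbre. Simplified: no mel filterbank is applied."""
    window = np.hanning(frame)
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        seg = signal[start:start + frame] * window
        # Log-magnitude spectrum of the frame (small floor avoids log(0)).
        logspec = np.log(np.abs(np.fft.rfft(seg)) + 1e-10)
        # Cepstral coefficients via a DCT-II of the log spectrum.
        k = np.arange(len(logspec))
        basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * k + 1)
                       / (2 * len(logspec)))
        frames.append(basis @ logspec)
    # Mean over all frames -> one feature vector for the whole tone.
    return np.mean(frames, axis=0)
```

Vectors produced this way could then feed a classifier or a self-organizing map, as in the approaches discussed above.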
“…Cosi et al, 1994; Feiten and Günzel, 1994; Spevak and Polfreman, 2001; Frühwirth and Rauber, 2001). Alternatives include multi-dimensional scaling (Kruskal and Wish, 1978), Sammon's mapping (Sammon, 1969), and generative topographic mapping (Bishop et al, 1998).…”
Section: Self-Organizing Maps
confidence: 99%