2006 IEEE International Conference on Multimedia and Expo
DOI: 10.1109/icme.2006.262724
Musical Signal Type Discrimination based on Large Open Feature Sets

Abstract: Automatic discrimination of musical signal types such as speech, singing, music, genres, or drumbeats within audio streams is of great importance, e.g. for radio broadcast stream segmentation. Yet, feature sets remain a subject of debate. We therefore suggest a large open feature set approach, starting with the systematic generation of 7k high-level features based on MPEG-7 Low-Level Descriptors and further feature contours. A subsequent fast Gain Ratio reduction followed by wrapper-based Floating Search leads to a strong basi…
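The filter stage the abstract describes — ranking a large brute-force feature set by Gain Ratio before the more expensive wrapper-based search — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the equal-width discretisation into 10 bins and the helper names (`gain_ratio`, `rank_features`) are assumptions.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, labels, bins=10):
    """Information gain of an equal-width-discretised feature,
    normalised by the split information (C4.5-style Gain Ratio)."""
    edges = np.histogram_bin_edges(feature, bins=bins)[1:-1]
    binned = np.digitize(feature, edges)
    n = len(labels)
    h_y_given_x, split_info = 0.0, 0.0
    for v in np.unique(binned):
        mask = binned == v
        w = mask.sum() / n
        h_y_given_x += w * entropy(labels[mask])
        split_info -= w * np.log2(w)
    ig = entropy(labels) - h_y_given_x
    return ig / split_info if split_info > 0 else 0.0

def rank_features(X, y, k):
    """Indices of the k columns of X with the highest Gain Ratio w.r.t. y."""
    scores = np.array([gain_ratio(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]
```

Only the surviving top-k columns would then be handed to the wrapper-based Floating Search, which is far too costly to run over all 7k candidates directly.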

Cited by 13 publications (11 citation statements)
References 8 publications (13 reference statements)
“…Columns two and three show the features and classifiers used in each work, whereas column four lists the percentage of the classification process. Schuller et al have also carried out an intensive work on music/speech discrimination [43][44][45][46][47]. Their method is based on Low-Level-Descriptors 7k + hi-level-features, and they have tested different types of music signals from different databases.…”
Section: Simulation Results
confidence: 99%
“…by Schuller, Wallhoff, Arsic, and Rigoll (2006), Tzanetakis et al (2001) or Tzanetakis (2002), achieving remarkable accuracy at particular genres.…”
Section: Genre
confidence: 97%
“…These and similar features have been applied successfully to various audio classification tasks, e.g. musical genre recognition [32], emotion recognition [33,34], and classification of non-linguistic vocalisations [35]. In order to find a set of features highly relevant for laughter classification, an automatic data-driven feature selection method called correlation-based featuresubset selection (CFS) [36] is used.…”
Section: Automatic Classification of Laughter
confidence: 99%
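The correlation-based feature-subset selection (CFS) mentioned in the excerpt above can be sketched via its standard merit heuristic, which favours subsets whose features correlate strongly with the class but weakly with each other. This is a minimal illustration under assumptions, not the cited implementation [36]: Pearson correlation as the relevance measure and the greedy forward search (`cfs_forward`) are choices made here for brevity.

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit: k*r_cf / sqrt(k + k*(k-1)*r_ff), where r_cf is the mean
    |feature-class correlation| and r_ff the mean |feature-feature correlation|."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                    for a, i in enumerate(subset) for j in subset[a + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward(X, y):
    """Greedy forward selection: add the feature that most improves the
    merit; stop when no addition helps."""
    remaining = list(range(X.shape[1]))
    selected, best = [], -np.inf
    while remaining:
        merit, j = max((cfs_merit(X, y, selected + [j]), j) for j in remaining)
        if merit <= best:
            break
        best = merit
        selected.append(j)
        remaining.remove(j)
    return selected
```

Because the merit penalises inter-feature correlation, a redundant copy of an already-selected feature does not raise the score, so the search stops early rather than accumulating duplicates.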