2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
DOI: 10.1109/waspaa.2013.6701890
Recurrence quantification analysis features for environmental sound recognition

Cited by 43 publications (45 citation statements); references 8 publications. Citing publications range from 2015 to 2024.
“…Further, most of the leading results were obtained by those who captured medium-range temporal information in the features used for classification. Four of the five highest-scoring systems did this: Roma et al [56] captured temporal repetition and similarity using "recurrence quantification analysis"; Rakotomamonjy and Gasso [55] used gradient features from image-processing; Geiger et al [48] extracted features from linear regression over time; Chum et al [46] trained a HMM. Each of these is a generic statistical model for temporal evolution, whose fitted parameters can then be used as features for classification.…”
Section: ASC Results (mentioning)
confidence: 99%
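The quoted systems share one idea: summarise how features evolve over time and feed the fitted parameters to a classifier. As a minimal sketch of the "linear regression over time" variant, assuming MFCC frames stored in a NumPy array of shape (n_frames, n_coeffs), one can fit a first-order polynomial to each coefficient trajectory per window and use the slopes and intercepts as features. The window length and function name below are illustrative assumptions, not the configuration of any cited system.

# Sketch only: per-window linear-regression features over MFCC trajectories.
# Assumption (not from the cited papers): mfcc is a (n_frames, n_coeffs) array,
# windows are non-overlapping, and win=50 is an arbitrary illustrative choice.
import numpy as np

def regression_features(mfcc, win=50):
    """Slope and intercept of each MFCC coefficient within non-overlapping windows."""
    n_frames, _ = mfcc.shape
    t = np.arange(win)
    feats = []
    for start in range(0, n_frames - win + 1, win):
        seg = mfcc[start:start + win]                 # (win, n_coeffs) block of frames
        slope, intercept = np.polyfit(t, seg, deg=1)  # fits every coefficient column at once
        feats.append(np.concatenate([slope, intercept]))
    return np.asarray(feats)                          # (n_windows, 2 * n_coeffs)

In practice such window-level descriptors would be pooled over a clip and passed to a standard classifier (e.g. an SVM), matching the quote's point that the fitted parameters of a temporal model serve as classification features.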
“…Method: approach
CPS [59]: segmentation, likelihood ratio test classification
DHV [60], [61]: MFCCs (features), HMMs (detection)
GVV [62], [63]: NMF (detection), HMMs (post-processing)
NVM [64], [65]: hierarchical HMMs + Random Forests (classification)
NR [66], [67]: MFCCs (features), SVMs (classification)
SCS [68], [69]: Gabor filterbank (features), HMMs (classification)
VVK [62], [70]: MFCCs (features), GMMs (detection)
Baseline [14]: NMF with learned bases (detection)
Systems were evaluated on the test set and on the simulated sets QMUL Instance and QMUL Abstract. The baseline, CPS, GVV and SCS systems performed equivalently across the 2 datasets.…”
Section: System (mentioning)
confidence: 99%
“…Features extracted from time-frequency decompositions based on matching pursuit have also been evaluated [8], [9]. Among hand-crafted features that have shown some success in providing discriminative information about audio scenes, we can also mention recurrence quantification analysis (RQA) [10], which aims at capturing recurring patterns in MFCC representations. Since, in most cases, the first step when classifying acoustic scenes is to compute a 2D time-frequency representation, some works have investigated features typically used in computer vision, such as histograms of oriented gradients (HOG) [2], [11], local binary patterns [12] or texture-based features [13], [14].…”
Section: Introduction (mentioning)
confidence: 99%
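To make the RQA idea in this quote concrete, here is a hedged sketch: threshold pairwise distances between MFCC frames to form a binary recurrence plot, then summarise it with two standard recurrence measures, recurrence rate and determinism. The Euclidean metric, fixed threshold, and minimum diagonal line length are illustrative assumptions, not the settings used in the paper above.

# Sketch only: recurrence plot and two simple RQA measures over MFCC frames.
# Assumptions (not from the cited paper): Euclidean distance, fixed threshold,
# minimum diagonal line length of 2.
import numpy as np

def recurrence_matrix(mfcc, threshold=1.0):
    """Binary recurrence plot: 1 where two frames are closer than the threshold."""
    dists = np.linalg.norm(mfcc[:, None, :] - mfcc[None, :, :], axis=-1)
    return (dists < threshold).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points, excluding the trivial main diagonal."""
    n = R.shape[0]
    return (R.sum() - np.trace(R)) / (n * n - n)

def determinism(R, min_len=2):
    """Fraction of recurrent points lying on diagonal lines of length >= min_len."""
    n = R.shape[0]
    on_lines = 0
    for k in range(1, n):                                  # scan each upper off-diagonal
        run = 0
        for v in np.append(np.diagonal(R, offset=k), 0):   # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= min_len:
                    on_lines += 2 * run                    # count the symmetric lower-triangle twin
                run = 0
    total = R.sum() - np.trace(R)
    return on_lines / total if total else 0.0

Higher determinism means the recurrences cluster along diagonal lines, i.e. repeated temporal patterns, which is exactly the kind of medium-range structure the earlier quote credits for strong scene-classification results.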