2015
DOI: 10.1007/s00530-015-0489-y
Feature selection and feature learning in arousal dimension of music emotion by using shrinkage methods

Abstract: low-SONE), root mean square and loudness-flux. Moreover, the shrinkage methods applied in logistic regression perform better for classification than most other methods. We obtain an average accuracy rate of 83.8%.
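
The abstract names shrinkage methods in logistic regression as the best-performing classifier. Below is a minimal sketch of that setup, assuming scikit-learn, synthetic feature data, and an L1 (lasso) penalty as the shrinkage method; the feature dimensions, data, and hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Sketch: L1-regularized (lasso) logistic regression for binary
# arousal classification. The shrinkage penalty drives weak feature
# coefficients to zero, performing feature selection implicitly.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))   # 8 acoustic features (placeholder data)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = make_pipeline(
    StandardScaler(),           # shrinkage assumes comparable feature scales
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")

clf.fit(X, y)
coefs = clf.named_steps["logisticregression"].coef_.ravel()
print("features kept by the lasso penalty:", np.flatnonzero(coefs != 0))
```

The zeroed coefficients are what make this "feature selection and feature learning" in one step: the penalty strength C controls how aggressively features are discarded.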

Cited by 12 publications (3 citation statements) · References 26 publications
“…Audio feature extraction was performed with openSMILE [46], a popular open-source library for audio feature extraction. Specifically, we used the "emobase" configuration file to extract a set of 988 low-level descriptors (LLDs) including MFCC, pitch, spectral, zero-crossing rate, loudness and intensity statistics, many of which have been shown to be effective for identifying emotion in music [38], [39], [47], [48]. Many other configurations are available in openSMILE but we provide the "emobase" set of acoustic features since it is well-documented and was designed for emotion recognition applications [49].…”
Section: Feature Extraction
confidence: 99%
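
The quoted passage describes extracting the 988 "emobase" functionals with openSMILE. A minimal sketch of that step using audEERING's opensmile Python wrapper; the audio path is a placeholder.

```python
# Sketch: extract the 988 "emobase" functionals from one audio file
# with the opensmile Python package (pip install opensmile).
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.emobase,         # 988 functionals
    feature_level=opensmile.FeatureLevel.Functionals,
)

features = smile.process_file("song.wav")  # placeholder path
print(features.shape)        # (1, 988): one row of LLD statistics per file
print(features.columns[:5])  # MFCC, pitch, loudness, etc. functionals
```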
“…But for researchers, it is challenging to get a clear picture of the MER process because of the diversity and complexity of the research systems. Hence, it is essential to compare and classify the techniques on the basis of the text-dependent and non-text-dependent features utilised for MER [13,14]. Some common issues arise when mapping the relationship between emotions and music features.…”
Section: Introduction
confidence: 99%
“…In the music emotion classification research of audio, Hwang et al [2] extracted 37 features to represent music samples, including rhythm, dynamics, and pitch, and utilized K-nearest neighbor classifier to output the results. Zhang et al [3] extracted 8 kinds of acoustic features to represent the arousal dimension in the 2D music emotion model and applied logistic regression methods to explain these features, but this research mainly focused on the 1D emotion and did not verify the specific emotion category. Ramani and Priya [4] extracted Mel frequency, spacing, and zero-crossing rate as separate representations of the best-fit ratio and used genetic algorithm as the classification technique.…”
Section: Introduction
confidence: 99%
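
Hwang et al.'s pipeline in the quoted passage pairs 37 hand-crafted features with a K-nearest-neighbor classifier. A minimal sketch of that classifier stage, assuming scikit-learn and placeholder feature data; the feature count is the only detail taken from the quote, and k is an illustrative choice.

```python
# Sketch: K-nearest-neighbor classification over a 37-dimensional
# acoustic feature vector (rhythm, dynamics, pitch, ...).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 37))       # placeholder feature matrix
y = rng.integers(0, 4, size=300)     # placeholder emotion classes

knn = make_pipeline(
    StandardScaler(),                    # KNN is distance-based; scale features first
    KNeighborsClassifier(n_neighbors=5), # k is a tuning choice, not from the paper
)
print(f"mean CV accuracy: {cross_val_score(knn, X, y, cv=5).mean():.3f}")
```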