Proceedings of the 2017 International Conference on Information Technology
DOI: 10.1145/3176653.3176709
Acoustic Features for Music Emotion Recognition and System Building

Cited by 2 publications
(2 citation statements)
References 18 publications
“…Most music datasets do not include audio files because of intellectual property concerns. Instead, the datasets provide emotional annotations, along with lists of songs and where to find them [29][30][31]. Some datasets include extracted features [32], and some consider the cultural background of the annotators [28][33].…”
Section: Dataset
confidence: 99%
“…The MIR toolbox relies on a built-in auditory toolbox and the Musical Instrument Digital Interface (MIDI) toolbox, which must be installed separately [40][41][42]. This tool was chosen because it can extract numerous features, including the five groups of features described below [29][43][44].…”
Section: Feature Extraction
confidence: 99%
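The statement above describes extracting acoustic features with the MIR toolbox, which runs in MATLAB. As a rough illustration of what one such low-level feature computation looks like, the sketch below computes the zero-crossing rate, one of the simplest timbral features, in plain Python. This is not the MIR toolbox's implementation; the 440 Hz sine and 8 kHz sample rate are arbitrary assumptions chosen for the example.

```python
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:])
        if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

# Synthetic 440 Hz sine sampled at 8 kHz: a sine crosses zero twice
# per period, so we expect roughly 2 * 440 / 8000 = 0.11 per pair.
sr, f = 8000, 440
frame = [math.sin(2 * math.pi * f * n / sr) for n in range(2048)]
print(round(zero_crossing_rate(frame), 3))  # ~0.11
```

In practice a toolbox computes such features per short analysis frame (e.g. 23 ms with 50% overlap) and aggregates them with statistics such as the mean and standard deviation before feeding a classifier.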