2022
DOI: 10.3390/s23010382
MERP: A Music Dataset with Emotion Ratings and Raters’ Profile Information

Abstract: Music is capable of conveying many emotions. The level and type of emotion of the music perceived by a listener, however, is highly subjective. In this study, we present the Music Emotion Recognition with Profile information dataset (MERP). This database was collected through Amazon Mechanical Turk (MTurk) and features dynamic valence and arousal ratings of 54 selected full-length songs. The dataset contains music features, as well as user profile information of the annotators. The songs were selected from t…

Cited by 13 publications (8 citation statements)
References 77 publications
“…As shown, both the dimensional (Belfi & Kacirek, 2021; Koh et al, 2022; Li et al, 2012; Lepping et al, 2016; Imbir & Gołąb, 2017; Rainsford et al, 2018) and discrete (Hill & Palmer, 2010; Xu et al, 2017; Vieillard et al, 2008) emotion models have been employed to categorize music emotion, yet only a few studies have employed both models for music classification (Eerola & Vuoskoski, 2011; Xie & Gao, 2022). Most of the datasets use Western music and pop music (Belfi & Kacirek, 2021; Eerola & Vuoskoski, 2011; Imbir & Gołąb, 2017; Koh et al, 2022; Li et al, 2012; Lepping et al, 2016), and only three studies utilize Chinese traditional instrumental music (Li et al, 2012; Xie & Gao, 2022; Xu et al, 2017). Li et al (2012) and Xu et al (2017) constructed emotional music datasets containing Chinese traditional instrumental music.…”
Section: Introduction
confidence: 81%
“…Multimodal datasets used in emotion-centric tasks, such as CAL500 (38) and AMC (39), combine audio features with emotion annotations. Additional datasets, including those from (40–45), incorporate labels, lyrics, and participant information. Integrating lyrics with audio data provides additional context, enhancing emotion recognition accuracy.…”
Section: Emotion/Affect Recognition
confidence: 99%
“…Multimodal music datasets usually contain personal information, such as physiological measurements (24) or users’ profile information (25). Protecting these data and handling them securely and confidentially falls under the category of data privacy.…”
Section: Privacy
confidence: 99%