Proceedings of the International Conference on Multimedia Information Retrieval 2010
DOI: 10.1145/1743384.1743431
Feature selection for content-based, time-varying musical emotion regression

Abstract: In developing automated systems to recognize the emotional content of music, we are faced with a problem spanning two disparate domains: the space of human emotions and the acoustic signal of music. To address this problem, we must develop models for both data collected from humans describing their perceptions of musical mood and quantitative features derived from the audio signal. In previous work, we have presented a collaborative game, MoodSwings, which records dynamic (per-second) mood ratings from multipl…

Cited by 82 publications (56 citation statements). References 14 publications.
“…One hour of music was analyzed by human experts to mark A-V values, and then passed into the system [7]. The system's average error was less than 15% of the total space when taking arousal and valence together, and less than 12% when treating them separately.…”
Section: B. Mood Identification
confidence: 99%
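The error figures quoted above are fractions of the arousal-valence (A-V) space. As an illustration of how such a normalized error can be computed, here is a minimal sketch; the `av_error_fraction` helper, the [-1, 1] coordinate range, and the choice of the space diagonal as the normalizer are all assumptions for illustration, not the cited paper's exact metric.

```python
import numpy as np

def av_error_fraction(pred, truth, span=2.0):
    """Mean Euclidean error between predicted and ground-truth
    arousal-valence points, as a fraction of the space diagonal.

    Assumes A-V coordinates lie in [-1, 1] x [-1, 1] (span = 2.0
    per axis); this normalization is an illustrative assumption.
    """
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    diag = span * np.sqrt(2.0)                  # diagonal of the square A-V space
    err = np.linalg.norm(pred - truth, axis=1)  # per-sample Euclidean error
    return float(err.mean() / diag)

# Identical predictions and ground truth give zero error:
print(av_error_fraction([[0.2, -0.4]], [[0.2, -0.4]]))  # 0.0
```

Normalizing by the diagonal makes the figure comparable across datasets whose A-V annotations use different numeric ranges.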
“…2). We have also obtained annotations of the arousal and valence (A-V) values in audio from human users, as this ground-truth data is invaluable in training emotion recognition systems [7]. Finally, our group has developed several methods for automatic music emotion recognition.…”
Section: Related Work
confidence: 99%
“…For each music segment in each frame, the MER system was trained to detect the emotion type in each segment. In contrast, an MER system can assume that music emotion changes continuously over time [10]. This approach expresses the emotional content of a music clip as a function of time-varying musical features [5].…”
Section: Recognizing Music Emotion
confidence: 99%
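The time-varying formulation described above amounts to regressing per-second A-V values from per-second acoustic features. The following is a minimal sketch of that setup; the synthetic data, the 20-feature representation, and the use of ridge regression are illustrative assumptions, not the cited systems' actual features or models.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical per-second data: each row is one second of audio.
# X holds acoustic features (e.g. spectral statistics); y holds the
# corresponding arousal-valence annotations. Shapes are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))                 # 120 seconds x 20 features
w = rng.normal(size=(20, 2))
y = X @ w + 0.1 * rng.normal(size=(120, 2))    # synthetic A-V targets

model = Ridge(alpha=1.0).fit(X[:90], y[:90])   # train on the first 90 seconds
pred = model.predict(X[90:])                   # predict A-V for each later second
print(pred.shape)                              # (30, 2): one A-V pair per second
```

A real system would replace the synthetic arrays with features extracted from audio frames and human A-V ratings, but the regression interface is the same.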
“…This study was performed with ordinary people from several cultural backgrounds, based on adjectives selected by the subjects. As shown in Figure 1(a), Hevner condensed the words used by the subjects into 66 adjectives, and this vocabulary was later extended [10]. Because of its intuitive representation, a series of studies has used these emotional adjectives to represent music emotions [3].…”
Section: Introduction
confidence: 99%
“…General approaches have concentrated on acoustic features representing the musical mood and criteria for the classification of moods [19][20][21]. A recent study focused on a context-based approach that uses contextual information such as websites, tags, and lyrics [22].…”
Section: Musical Mood Recognition
confidence: 99%