2020
DOI: 10.1007/s00779-020-01393-4
Emotional classification of music using neural networks with the MediaEval dataset

Abstract: The proven ability of music to transmit emotions has provoked increasing interest in the development of new algorithms for music emotion recognition (MER). In this work, we present an automatic system for the emotional classification of music implemented with a neural network. This work is based on a previous implementation of a dimensional emotion prediction system in which a Multilayer Perceptron (MLP) was trained on the freely available MediaEval database. Although these previous results are good in terms of …
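The setup the abstract describes, namely training an MLP on MediaEval-style valence/arousal annotations and turning dimensional prediction into classification, can be sketched as follows. This is not the authors' code: the feature dimensions, the synthetic annotations, and the four-quadrant discretization are all assumptions made for illustration.

```python
# Hedged sketch: MLP classification of songs into valence/arousal quadrants.
# All data here is synthetic; MediaEval provides real per-song audio features
# and continuous valence/arousal annotations.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for MediaEval audio features (e.g. MFCCs, spectral statistics).
n_songs, n_features = 400, 20
X = rng.normal(size=(n_songs, n_features))

# Stand-in annotations: continuous valence/arousal, here synthesized as
# noisy functions of two features so the task is learnable.
valence = X[:, 0] + 0.1 * rng.normal(size=n_songs)
arousal = X[:, 1] + 0.1 * rng.normal(size=n_songs)

# Discretize the dimensional annotations into the four quadrants of the
# valence/arousal plane, turning prediction into 4-class classification.
y = 2 * (valence > 0).astype(int) + (arousal > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"quadrant accuracy: {acc:.2f}")
```

The quadrant discretization is one common way to map continuous valence/arousal scores to categorical emotions; the paper's actual class definitions may differ.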

Cited by 13 publications (6 citation statements). References 32 publications (26 reference statements).
“…Recently, there has been a body of work applying deep neural network models to capture the association between mood/emotion and songs by taking advantage of audio features (Saari et al 2013; Panda 2019; Korzeniowski et al 2020; Panda, Malheiro, and Paiva 2020; Medina, Beltrán, and Baldassarri 2020), lyrics features (Fell et al 2019; Hrustanović, Kavšek, and Tkalčič 2021), as well as both lyrics and audio features (Delbouys et al 2018; Parisi et al 2019; Wang, Syu, and Wongchaisuwat 2021; Bhattacharya and Kadambari 2018). For their lyrics-based model, Delbouys et al predict a song's mood along the 'valence' and 'arousal' dimensions using a 100-dimensional word2vec embedding trained on 1.6 million lyrics, fed into several neural architectures such as GRUs, LSTMs, and convolutional networks.…”
Section: Prediction of Mood with Lyrics and Acoustics
Confidence: 99%
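The lyrics branch described in that citation statement (word2vec embeddings fed through a recurrent network whose final state yields mood scores) can be sketched with a forward pass in plain NumPy. This is not Delbouys et al.'s code: the vocabulary size, hidden size, and random weight and embedding values are stand-ins; only the 100-dimensional embedding width follows the cited description.

```python
# Hedged sketch of a lyrics-based mood model: token embeddings -> LSTM ->
# two output scores (valence, arousal). Untrained, forward pass only.
import numpy as np

rng = np.random.default_rng(0)
vocab, emb_dim, hidden = 50, 100, 32  # 100-d embeddings as in the citation

E = rng.normal(scale=0.1, size=(vocab, emb_dim))       # word2vec stand-in
W = rng.normal(scale=0.1, size=(4 * hidden, emb_dim))  # input weights
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))   # recurrent weights
b = np.zeros(4 * hidden)
W_out = rng.normal(scale=0.1, size=(2, hidden))        # valence, arousal head

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_scores(token_ids):
    """Run one lyric (a list of token ids) through the LSTM; return 2 scores."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for t in token_ids:
        z = W @ E[t] + U @ h + b
        i, f, o, g = np.split(z, 4)          # gate pre-activations
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                    # cell-state update
        h = o * np.tanh(c)                   # hidden-state update
    return W_out @ h                         # raw valence/arousal scores

scores = lstm_scores([3, 17, 42, 8])
print(scores.shape)  # (2,)
```

In the cited work the weights would be learned end to end and the embeddings pretrained on the 1.6 million-lyric corpus; GRU or convolutional variants swap only the sequence encoder.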
“…MER, a field investigating computational models for automatically recognizing the perceived emotion of music, has made great progress in recent decades [91]-[94]. Kim et al noted that MER usually comprises extracting music features from the original audio, modeling the relations between those features and perceived emotions, and predicting the emotion of untagged music [95].…”
Section: From Music Emotion Recognition to Music Preference Prediction
Confidence: 99%
“…Medina [13] developed an emotional classification of music using neural networks with the MediaEval dataset; the resulting valence and arousal values showed imbalanced classification results. However, the required characteristics of the dataset, namely its size, class balance, and annotation quality, still needed improvement to achieve good performance in terms of accuracy.…”
Section: Literature Review
Confidence: 99%