2021
DOI: 10.3390/electronics10151769
A Novel Emotion-Aware Hybrid Music Recommendation Method Using Deep Neural Network

Abstract: Emotion-aware music recommendation has gained increasing attention in recent years, as music has the ability to regulate human emotions. Exploiting emotional information has the potential to improve recommendation performance. However, conventional studies represented emotion as discrete categories and could not predict users’ emotional states at time points for which no user activity data exist, let alone account for the influence of social events. In this study, we proposed an emotion-a…

Cited by 13 publications (4 citation statements) | References 80 publications
“…DL methods have also been applied to music recommendation, such as multi-layer perceptrons (MLP) and RNNs. These methods effectively capture the complex patterns in user-item interactions and have achieved state-of-the-art performance on music recommendation benchmarks (Lin et al., 2019; Schedl, 2019; Dang et al., 2021; Wang et al., 2021; Singh et al., 2022). In addition, Schindler et al. (2012) proposed using song metadata to improve the accuracy of music recommendations, and Kim et al. (2020) used DL for music recommendation with users’ listening history and audio features of songs as input.…”
Section: Background (mentioning)
confidence: 99%
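The MLP-based modeling of user-item interactions described in the statement above can be sketched minimally. This is a toy, untrained illustration, not the architecture of any cited paper: the embedding dimension, hidden-layer width, and random weights are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 5 users, 8 items, 4-dimensional embeddings.
n_users, n_items, d = 5, 8, 4
user_emb = rng.normal(size=(n_users, d))
item_emb = rng.normal(size=(n_items, d))

# One hidden layer over the concatenated user/item embeddings.
W1 = rng.normal(size=(2 * d, 16))
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1))
b2 = np.zeros(1)

def score(u, i):
    """Predicted preference of user u for item i, squashed to (0, 1)."""
    x = np.concatenate([user_emb[u], item_emb[i]])
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return float(1.0 / (1.0 + np.exp(-(h @ W2 + b2)[0])))

def recommend(u, k=3):
    """Top-k item indices for user u, ranked by the MLP score."""
    scores = [score(u, i) for i in range(n_items)]
    return sorted(range(n_items), key=lambda i: -scores[i])[:k]

top3 = recommend(0)  # three item indices for user 0
```

In practice the embeddings and weights would be learned jointly from interaction data (e.g. by gradient descent on a ranking or log loss), which is what lets such models capture the nonlinear user-item patterns the citing papers refer to.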
“…For example, during the COVID-19 pandemic, researchers formulated a better way to recommend music based on users’ emotions by reading users’ facial expressions or obtaining direct input from them [109]. Studies have also shown that emotion-aware algorithms can utilize the information provided by COVID-19 cases and outperform baseline algorithms [112]. Typically, context-aware music recommendation systems study users’ music listening behaviors in real-world contexts and use that information as input to the system.…”
Section: Listening to Music Within the Context of COVID-19 (mentioning)
confidence: 99%
“…At the same time, with the advent of multi-aspect temporal-textual embedding in author linking [14], many recommendation systems based on text emotion have also emerged. In particular, some deep learning models, such as context-aware models [15] and EmoDNN [16], make deep learning-based emotion understanding more accurate, which further improves the personalization of recommendation systems. In 2023, Baqach et al. proposed a new method called CLAS (CNN-LSTM-Attention-SVM), based on CNNs, LSTMs, attention mechanisms, and support vector machines [17].…”
Section: Introduction (mentioning)
confidence: 99%