2020
DOI: 10.12785/ijcds/090308

A Survey on Autonomous Techniques for Music Classification based on Human Emotions Recognition

Abstract: Music is one of the finest elements for triggering emotions in human beings. Every person feels music, and emotions are automatically provoked by listening to it. Music is also considered a strong stress reliever. With the increasing size of music datasets available online and the advancement of automation technologies, the emotions in music need to be recognized automatically so that online music databases can be organized and browsed efficiently. Automation of music emotion classific…

Cited by 5 publications (4 citation statements).
References 67 publications (148 reference statements).
“…They used language processing technology to collect music context characteristics and then applied a long short-term memory (LSTM) network model to train on a real data set and obtain the best experimental result characteristics. In [14], the authors highlighted the problems of the long music classification cycle and low accuracy, and showed that establishing an optimized recurrent neural network model and adding an attention mechanism to it can improve the classification accuracy. They established a fusion model based on deep learning and collaborative filtering and used an improved neural network mining algorithm and an autoencoder to capture the hidden features in music and to better integrate the collaborative filtering model with the deep learning model.…”
Section: Related Work (mentioning)
confidence: 99%
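The excerpt above only names the architecture from [14] (a recurrent network with an attention mechanism) without specifying its details. The following is a minimal PyTorch sketch of that general idea, assuming frame-level audio features such as MFCC sequences, a bidirectional LSTM encoder, additive attention pooling, and four emotion classes; every class name, dimension, and label set here is an illustrative assumption, not the cited authors' implementation.

# Minimal sketch (assumed, not the cited implementation): a bidirectional LSTM
# encoder with attention pooling for music emotion classification.
import torch
import torch.nn as nn

class AttentionLSTMClassifier(nn.Module):
    def __init__(self, n_features=40, hidden=128, n_emotions=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)           # one attention score per time frame
        self.head = nn.Linear(2 * hidden, n_emotions)  # e.g. happy/sad/angry/calm (assumed labels)

    def forward(self, x):                        # x: (batch, frames, n_features), e.g. MFCCs
        h, _ = self.lstm(x)                      # h: (batch, frames, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over frames
        context = (w * h).sum(dim=1)             # weighted sum -> one vector per clip
        return self.head(context)                # unnormalised emotion scores

# Toy usage: 8 clips, 300 frames of 40-dimensional features each.
model = AttentionLSTMClassifier()
logits = model(torch.randn(8, 300, 40))          # shape: (8, 4)

Trained with cross-entropy against clip-level emotion labels, the attention weights indicate which frames drive each prediction, which is the kind of gain in classification accuracy the excerpt attributes to adding an attention mechanism.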
“…However, gesturing is not exclusive to hand movements; it can be done using face motion or even simple gazing. Therefore, researchers have attempted various approaches for controlling objects using symbolic hand gestures [22,38,45,52,54], deictic (pointing) hand gestures [23,25,34], eye gaze [13,24,27,31,36,37,50,51,53], and facial expressions [3,18,46]. Specifically for the automotive domain, in-vehicle interaction has been attempted using hand gestures [5,16,33,40], eye gaze [36], and facial expressions [44], while outside-the-vehicle interaction has been attempted using pointing gestures [17,41], eye gaze [24], and head pose [26,30].…”
Section: Related Work (mentioning)
confidence: 99%
“…Deep Neural Networks (DNNs), a recent technique, have been successful in several applications such as speech and emotion recognition [19] and music composition and classification [20], [21]. The use of DNNs for the separation of mixed audio signals started in 2014.…”
Section: Introduction (mentioning)
confidence: 99%
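The excerpt only notes that DNN-based separation of mixed audio signals began around 2014. A common formulation in that line of work trains a network to estimate a time-frequency mask over the mixture's magnitude spectrogram; the sketch below illustrates that idea with assumed layer sizes and is not taken from the cited papers.

# Illustrative sketch (assumed): a feed-forward DNN that maps one mixture
# magnitude-spectrogram frame to a soft mask isolating a single source.
import torch
import torch.nn as nn

n_bins = 513  # frequency bins per STFT frame (assumes a 1024-point FFT)

mask_net = nn.Sequential(
    nn.Linear(n_bins, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_bins), nn.Sigmoid(),  # soft mask in [0, 1] for every frequency bin
)

mixture = torch.rand(16, n_bins)                 # 16 magnitude-spectrogram frames of the mixture
estimated_source = mask_net(mixture) * mixture   # masked frames approximate one source, shape (16, 513)

In practice such a network is trained to minimize the error between the masked mixture and the clean target source's spectrogram, and the separated waveform is recovered with the mixture phase via an inverse STFT.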