2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA)
DOI: 10.1109/inista.2017.8001169
Music emotion analysis using semantic embedding recurrent neural networks

Cited by 11 publications (11 citation statements); references 13 publications.
“…Some reduce the length of the music pieces to shorten the database analysis process. The emotion models used include Thayer's [17], [61], [62], [48], [63], [19], [64], [65]; categorical [66]; Russell's [18], [40], [67]; 2-D [34]; GEMS [68]; the Indian classical model [19]; and Hevner's [64]. Authors may also provide a list of adjective terms and their synonyms for categorizing the songs, again to reduce time consumption. Some example songs with defined categories may also be given to the annotators to help them judge which class a particular song belongs to.…”
Section: B. Dimensional Approach (mentioning)
confidence: 99%
“…For batch learning, which is currently the standard procedure for training neural networks due to performance considerations, the matrices K_{F,F} and K_{A,A} can be calculated over batches instead of the full dataset. We described this approach in less general terms in [19] as semantic embedding, borrowing the idea from the domain of text processing [26]. Semantic embedding in texts seeks to learn similarity between documents using pairs of similar and dissimilar files, and it can be considered a special case of the described idea (with cosine similarity as the k function and K_{A,A} built as a matrix of ones and zeroes from a known relation of similarity, rather than calculated from annotations).…”
Section: Similarity-Based Loss for a Neural Network (mentioning)
confidence: 99%
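To make the quoted idea concrete, here is a minimal NumPy sketch of a similarity-based batch loss, assuming cosine similarity as the k function for both matrices and a mean-squared distance between them; the function names, shapes, and toy data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def cosine_similarity_matrix(X):
    """Pairwise cosine similarities between the rows of X (n x dim)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_unit = X / np.maximum(norms, 1e-12)  # guard against zero vectors
    return X_unit @ X_unit.T

def similarity_loss(F, A):
    """Mean squared difference between K_{F,F} (network outputs F) and
    K_{A,A} (annotations A), both computed over a single batch."""
    K_FF = cosine_similarity_matrix(F)
    K_AA = cosine_similarity_matrix(A)
    return np.mean((K_FF - K_AA) ** 2)

# Toy batch: 4 clips with 8-dim network outputs and 2-dim annotations.
rng = np.random.default_rng(0)
print(similarity_loss(F=rng.normal(size=(4, 8)), A=rng.normal(size=(4, 2))))
```

In an actual training loop the same computation would live in an autodiff framework so that gradients flow back through K_{F,F} into the network weights; the batch-wise formulation keeps both matrices at batch size rather than dataset size.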
“…(i) Cosine: the similarity notion we used in the earlier paper [19], where we first tackled learning similarity. It had previously been used in the document-similarity approach called semantic embedding.…”
Section: Measures of Similarity Between Vectors (mentioning)
confidence: 99%
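For contrast with annotation-derived targets, a short sketch of the special case described above: K_{A,A} built as a matrix of ones and zeroes from a known relation of similarity between documents, as in text-style semantic embedding. The helper name and the pair list are hypothetical, purely for illustration.

```python
import numpy as np

def binary_similarity_targets(similar_pairs, n):
    """Build K_{A,A} from known (i, j) pairs of similar documents:
    1.0 for known-similar pairs, 0.0 otherwise."""
    K = np.zeros((n, n))
    for i, j in similar_pairs:
        K[i, j] = K[j, i] = 1.0
    np.fill_diagonal(K, 1.0)  # every document is similar to itself
    return K

# e.g. documents (0, 2) and (1, 3) are known to be similar
print(binary_similarity_targets([(0, 2), (1, 3)], n=4))
```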