2009 International Conference on Computational Science and Engineering
DOI: 10.1109/cse.2009.184
Reinforcement Learning of Listener Response for Mood Classification of Audio

Abstract: This paper describes a method of applying a reinforcement learning artificial intelligence to categorize audio files by mood based on listener response during a performance. The system discussed is implemented in a performance art environment designed to present the moods of multiple participants simultaneously in a room via a diffusion of representative audio samples.
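The abstract describes learning a song-to-mood mapping from listener responses gathered during a performance. The paper does not give its exact update rule, but one minimal way to realize this idea is a bandit-style learner that treats each (clip, mood) pair as an arm and updates a running value estimate from binary listener feedback. The sketch below is hypothetical: the `MoodBandit` class, its epsilon-greedy selection, and the incremental-mean update are illustrative assumptions, not the authors' implementation.

```python
import random


class MoodBandit:
    """Hypothetical bandit-style learner for song-to-mood mapping.

    Each (clip, mood) pair is treated as an arm; a yes/no listener
    response ("Does this audio match the mood?") is the reward signal.
    This is an illustrative sketch, not the paper's actual algorithm.
    """

    def __init__(self, clips, moods, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.moods = list(moods)
        # value[clip][mood]: running estimate that `clip` expresses `mood`
        self.value = {c: {m: 0.0 for m in moods} for c in clips}
        self.count = {c: {m: 0 for m in moods} for c in clips}

    def choose_clip(self, mood):
        """Epsilon-greedy: usually play the best-rated clip for `mood`,
        occasionally explore a random one."""
        clips = list(self.value)
        if self.rng.random() < self.epsilon:
            return self.rng.choice(clips)
        return max(clips, key=lambda c: self.value[c][mood])

    def feedback(self, clip, mood, matched):
        """Incremental-mean update from one binary listener response."""
        reward = 1.0 if matched else 0.0
        self.count[clip][mood] += 1
        n = self.count[clip][mood]
        self.value[clip][mood] += (reward - self.value[clip][mood]) / n

    def classify(self, clip):
        """Categorize a clip as the mood with the highest estimate."""
        return max(self.moods, key=lambda m: self.value[clip][m])
```

After a few rounds of feedback the estimates converge toward the fraction of listeners who confirmed each pairing, so `classify` returns the mood most consistently endorsed for that clip.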

Cited by 6 publications (3 citation statements)
References 12 publications
“…The users can provide feedback on the system recommended audio by answering the question "Does this audio match the mood you set?" [18]. Here, the key focus is to learn the mapping of a song to the selected mood, however, in this article, we focus on the automatic determination of the emotion.…”
Section: Related Work and Background
confidence: 99%
“…We found very few studies in audio using RL/deep RL. In [14], the authors describe an avenue of using RL to classify audio files into several mood classes depending upon listener response during a performance. In [15], the authors introduce the 'EmoRL' model that triggers an emotion classifier as soon as it gains enough confidence while listening to an emotional speech.…”
Section: Introduction
confidence: 99%
“…Users could compete among them or collaborate to win the game. Stokholm and Pasquier [28] implemented a system mixing audio representations of the mood of several users to increase collaboration and empathy. Vinyes and colleagues developed the Audio Explorer system, enabling users to concurrently modify the audio mixing of a piece of music downloaded from the Web and to share the resulting content [29].…”
Section: EAI Endorsed Transactions On
confidence: 99%