2019
DOI: 10.1145/3363560
Affective Computing for Large-scale Heterogeneous Multimedia Data

Abstract: The wide popularity of digital photography and social networks has generated a rapidly growing volume of multimedia data (i.e., image, music, and video), resulting in a great demand for managing, retrieving, and understanding these data. Affective computing (AC) of these data can help to understand human behaviors and enable wide applications. In this article, we survey the state-of-the-art AC technologies comprehensively for large-scale heterogeneous multimedia data. We begin this survey by introducing the ty…

Cited by 59 publications (23 citation statements). References 148 publications.
“…In line with most previous work on affect detection using dimensional representations, we address modeling viewers' emotional responses as a regression problem [19]. Support Vector Machines are a widely deployed approach when modeling affective responses to media content, especially in regression settings (see the reviews of technical work by Wang et al. [66] and, more recently, Zhao et al. [68]). For this reason, we use Support Vector Regressors with a Radial Basis Function (RBF) kernel as predictors in our experiments.…”
Section: Response Data
confidence: 98%
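The excerpt above describes a common setup in this literature: a continuous affect label (e.g., valence) predicted from precomputed multimedia features by an RBF-kernel Support Vector Regressor. The following is a minimal sketch of that setup in scikit-learn; the feature dimensionality and the synthetic valence targets are illustrative assumptions, not values from the cited work:

```python
# Minimal sketch: RBF-kernel SVR for dimensional affect regression.
# Features and labels are synthetic stand-ins for real audio-visual data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # hypothetical audio-visual feature vectors
y = np.tanh(X[:, 0] + 0.5 * X[:, 1])  # hypothetical valence scores in [-1, 1]

# Standardizing features matters for RBF kernels, hence the pipeline.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
model.fit(X[:150], y[:150])           # train on the first 150 samples
print("held-out predictions:", model.predict(X[150:155]))
```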
“…The insights gained by this act of emotional perspective-taking can complement any information offered by behavior in isolation, thereby enabling an observer to make accurate inferences even for ambiguous cases (e.g., [41]). However, context-sensitive approaches remain under-explored in automatic affect detection [23], despite researchers generally acknowledging their potential [60,66,68]. Likely causes for this neglect are the substantial challenges involved in (1) identifying relevant contextual influences for emotional responses in an application setting, as well as (2) developing technical solutions that provide automatic systems with an awareness of them [29].…”
Section: Introduction
confidence: 99%
“…The number of categories of emotion representation has always been controversial in psychology [10]. Researchers have focused widely on the two emotion representation models deployed by psychologists: the categorical (or discrete) emotion model (CEM) and the dimensional emotion model (DEM) [27]. The dimensional emotion model represents human emotion in a dimensional structure, where each dimension captures one characteristic of emotion.…”
Section: Emotion Representation
confidence: 99%
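The contrast the excerpt draws between CEM and DEM can be made concrete with a small sketch. The valence-arousal coordinates below are rough illustrative placements, not values taken from the survey or the citing paper:

```python
# Illustrative contrast of the two emotion representations:
# CEM names discrete emotions; DEM places emotions in a continuous space
# (here, valence-arousal). Coordinates are illustrative placements only.
CEM_LABELS = ["joy", "sadness", "anger", "fear", "surprise", "disgust"]

DEM_VALENCE_AROUSAL = {
    "joy":      ( 0.8,  0.6),
    "sadness":  (-0.7, -0.4),
    "anger":    (-0.6,  0.8),
    "fear":     (-0.8,  0.7),
    "surprise": ( 0.3,  0.9),
    "disgust":  (-0.7,  0.2),
}

def nearest_category(valence: float, arousal: float) -> str:
    """Map a DEM point back to the closest CEM label (Euclidean distance)."""
    return min(
        DEM_VALENCE_AROUSAL,
        key=lambda k: (DEM_VALENCE_AROUSAL[k][0] - valence) ** 2
                    + (DEM_VALENCE_AROUSAL[k][1] - arousal) ** 2,
    )

print(nearest_category(0.7, 0.5))  # -> "joy"
```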
“…Liu 2018), images (Zhao et al. 2017; Yang et al. 2018b; Zhao et al. 2018a; Yang et al. 2018a; Zhao et al. 2018c; 2019c; 2019b; Yao et al. 2019; Zhan et al. 2019), speech (El Ayadi, Kamel, and Karray 2011), physiological signals (Alarcao and Fonseca 2017; Zhao et al. 2019a), and multi-modal data (Soleymani et al. 2017; Zhao et al. 2019d). Attention-Based Models: Since attention can be considered a dynamic feature extraction mechanism that combines contextual fixations over time (Mnih et al. 2014; Chen et al. 2017), it has been seamlessly incorporated into deep learning architectures and has achieved outstanding performance in many vision-related tasks, such as image classification (Woo et al. 2018), image captioning (You et al. 2016; Chen et al. 2017), and action recognition (Song et al. 2017).…”
Section: Related Work
confidence: 99%
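The attention mechanism the excerpt refers to (a dynamic, content-dependent weighting of features) can be sketched in its most common form, scaled dot-product attention. This is a generic NumPy sketch; the shapes and random inputs are illustrative assumptions, not taken from any cited model:

```python
# Minimal sketch of scaled dot-product attention over feature vectors.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> (n_q, d_v)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 16))    # e.g., 4 query positions
K = rng.normal(size=(10, 16))   # e.g., 10 contextual regions/fixations
V = rng.normal(size=(10, 32))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 32)
```

The dynamic weighting here is what lets such modules emphasize the image regions or time steps most relevant to an emotional response, which is why they slot naturally into the classification, captioning, and recognition architectures the excerpt lists.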