2016
DOI: 10.1016/j.neucom.2015.01.095
Fusing audio, visual and textual clues for sentiment analysis from multimodal content

Cited by 426 publications (209 citation statements)
References 43 publications
“…A further subset of tweets (categorized as General) was collected from the stream, without any keyword filters, in order to further broaden the representative scope of our dataset. We additionally filtered out tweets containing external links or images, arguing that analysis of these multimodal tweets is a separate problem, belonging to the domain of Multimodal Sentiment Analysis (Poria et al, 2016;Soleymani et al, 2017). After the entire filtering process 4 was complete, we obtained 7,026 tweets across the different topics, which would be annotated with 5x coverage.…”
Section: Data Collection
confidence: 99%
“…Some recent work has also been conducted on fusing different modalities to detect emotions and polarity from videos [6], [7], [8]. This paper conducts extensive research on the different facets of this topic and aims to solve the following two questions: 1) Is a common framework useful for both multimodal emotion recognition and multimodal sentiment analysis?…”
Section: Introduction
confidence: 99%
“…While the visual and the audio modalities provide additional evidence that improves classification accuracy, we found the textual modality to have the greater impact on the result (Cambria and Hussain, 2015;Cambria et al, 2013c;Poria et al, 2015a;2015b).…”
Section: Introduction
confidence: 74%