Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.343

CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotation of Modality

Abstract: Previous studies in multimodal sentiment analysis have used limited datasets that contain only unified multimodal annotations. However, unified annotations do not always reflect the independent sentiment of single modalities, and they limit a model's ability to capture differences between modalities. In this paper, we introduce a Chinese single- and multimodal sentiment analysis dataset, CH-SIMS, which contains 2,281 refined video segments in the wild with both multimodal and independent unimodal annotations. It all…
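The abstract's key point is that every segment carries independent per-modality labels in addition to the fused multimodal label. As a minimal sketch of what such a record could look like (the field names and the [-1, 1] value range are assumptions for illustration, not the dataset's actual release format):

from dataclasses import dataclass

# Hypothetical record layout for one CH-SIMS clip, illustrating the
# fine-grained scheme from the abstract: one multimodal label plus an
# independent label for each single modality.
@dataclass
class SimsSegment:
    video_id: str
    text: str                 # transcript of the spoken utterance
    label_multimodal: float   # overall sentiment, e.g. in [-1, 1]
    label_text: float         # text-only sentiment annotation
    label_audio: float        # audio-only sentiment annotation
    label_vision: float       # vision-only sentiment annotation

# A unimodal label can disagree with the multimodal one; this
# disagreement is exactly what unified-annotation datasets cannot expose.
seg = SimsSegment("video_0001_clip_03", "...", 0.6, 0.2, 0.8, 0.6)
assert seg.label_text != seg.label_multimodal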

Cited by 168 publications (82 citation statements) · References 26 publications
“…In this work, experiments are conducted on two public multimodal sentiment analysis datasets, MOSI [31] and SIMS [28]. The basic statistics of each dataset are shown in Table 1.…”
Section: Datasets
“…SIMS. The SIMS dataset [28] is a Chinese MSA benchmark with fine-grained unimodal annotations. It comprises 2,281 refined video clips collected from movies, TV serials, and variety shows, with spontaneous expressions, various head poses, occlusions, and illumination conditions.…”
Section: Datasets
“…CH-SIMS (2020) [74]: the dataset contains 2,281 refined video segments in the wild with both multimodal and independent unimodal annotations. Informativeness and Consistency are usually evaluated by human evaluation.…”
Section: Ubuntu
“…It contains over 23,000 sentences from 1,000 speakers across 250 topics. CH-SIMS (Yu et al., 2020) is a Chinese multimodal sentiment analysis dataset with fine-grained sentiment annotations per modality. IEMOCAP (Busso et al., 2008) is an in-lab recorded dataset which consists of 151 videos of scripted dialogues between acting participants.…”
Section: Related Resources
“…On a daily basis across the world, intentions and emotions are conveyed through the joint use of these three modalities. While English, Chinese, and Spanish have resources for the computational analysis of multimodal language (focusing on the analysis of sentiment, subjectivity, or emotions (Yu et al., 2020; Zadeh et al., 2018b; Park et al., 2014; Wöllmer et al., 2013; Poria et al., 2020)), other commonly spoken languages lag behind. As Artificial Intelligence (AI) increasingly blends into everyday life across the globe, there is a genuine need for intelligent entities capable of understanding multimodal language in different cultures.…”
Section: Introduction