Findings of the Association for Computational Linguistics: EMNLP 2020 2020
DOI: 10.18653/v1/2020.findings-emnlp.224
COSMIC: COmmonSense knowledge for eMotion Identification in Conversations

Abstract: In this paper, we address the task of utterance-level emotion recognition in conversations using commonsense knowledge. We propose COSMIC, a new framework that incorporates different elements of commonsense such as mental states, events, and causal relations, and builds upon them to learn interactions between interlocutors participating in a conversation. Current state-of-the-art methods often encounter difficulties in context propagation, emotion shift detection, and differentiating between related emotion clas…

Cited by 175 publications (128 citation statements)
References 30 publications (30 reference statements)
“…Conversations are acted, and the actors either improvise or follow a script depending on the particular conversation. Like [ 60 ], we also use the six most common emotions, namely happy, sad, neutral, angry, excited, and frustrated, and the same data splits. In the future, however, we plan to use the Hourglass of Emotions [ 61 ] as a categorization model.…”
Section: Generalization Performance Resultsmentioning
confidence: 99%
“…For additional comparison, the state-of-the-art method COSMIC reports an F1 score of 0.6521 [ 60 ]. However, this method uses a commonsense knowledge extractor trained on large knowledge bases.…”
Section: Generalization Performance Resultsmentioning
confidence: 99%
“…Text classification is widely used to evaluate the effectiveness of word embeddings in NLP tasks [36]. We adopted the Fudan dataset, which contained documents on 20 topics, for training and testing. Following [29], we selected 12,545 (6424 for training and 6121 for testing) documents on five topics: environment, agriculture, economy, politics, and sports.…”
Section: Text Classificationmentioning
confidence: 99%
“…Word embedding (also known as distributed word representation) denotes a word as a real-valued and lowdimensional vector. In recent years, it has attracted significant attention and has been applied to many natural language processing (NLP) tasks, such as sentiment classification [1][2][3][4][5], sentence/concept-level sentiment analysis [6][7][8][9][10], question answering [11,12], text analysis [13,14], named entity recognition [15,16], and text segmentation [17,18]. For instance, in some prior studies, word embeddings in a sentence were summed and averaged to obtain the probability of each sentiment [19].…”
Section: Introductionmentioning
confidence: 99%
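The averaging approach mentioned in the excerpt above, where word embeddings in a sentence are summed and averaged before predicting a sentiment probability, can be sketched as follows. This is a minimal illustration, not the cited studies' actual pipeline: the toy embedding table, its dimensionality, and the weight matrix `W` are invented for the example, whereas real systems would load pretrained vectors (e.g. GloVe or word2vec) and learn the classifier weights.

```python
import numpy as np

# Toy embedding table: each word maps to a 4-dimensional vector.
# (Hypothetical values; real systems load pretrained embeddings.)
embeddings = {
    "great": np.array([0.9, 0.1, 0.2, 0.7]),
    "movie": np.array([0.3, 0.4, 0.5, 0.1]),
    "boring": np.array([-0.8, 0.2, -0.1, -0.6]),
}

def sentence_vector(tokens):
    """Average the word vectors of a sentence (bag-of-embeddings)."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0)

def softmax(z):
    """Numerically stable softmax over class scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical weight matrix for two sentiment classes
# (row 0: positive, row 1: negative); normally learned from data.
W = np.array([[ 1.0, 0.0,  0.5,  1.0],
              [-1.0, 0.0, -0.5, -1.0]])

# Probability of each sentiment for the sentence "great movie".
probs = softmax(W @ sentence_vector(["great", "movie"]))
```

The design point the excerpt makes is that this bag-of-embeddings baseline discards word order entirely, which is precisely the weakness that conversational models such as COSMIC, with explicit context propagation, aim to address.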