Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.360
An Iterative Emotion Interaction Network for Emotion Recognition in Conversations

Abstract: Emotion recognition in conversations (ERC) has received much attention recently in the natural language processing community. Considering that the emotions of the utterances in conversations are interactive, previous works usually implicitly model the emotion interaction between utterances by modeling dialogue context, but the misleading emotion information from context often interferes with the emotion interaction. We noticed that the gold emotion labels of the context utterances can provide explicit and accu…

Cited by 28 publications (11 citation statements)
References 17 publications
“…A substantial number of approaches rely heavily on RNNs (e.g., GRUs) to model the sequence (Jiao et al., 2019; Lu et al., 2020). The biggest problem with this is that it inherently decouples word-embedding extraction from sequence modeling, whereas BERT-like models handle both at once, often leading to better performance.…”
Section: Related Work
confidence: 99%
“…Weighted F1 (%) on MELD and IEMOCAP:

Model | MELD | IEMOCAP
BERT+MTL (Li et al., 2020b) | 61.90 | –
BiERU-lc (Li et al., 2020c) | 60.84 | 64.65
DialogueGCN (Ghosal et al., 2019) | 58.1 | 64.18
RGAT (Ishiwatari et al., 2020) | 60.91 | 65.22
CESTa | 58.36 | 67.1
VHRED (Hazarika et al., 2021) | – | 58.6
SumAggGIN (Sheng et al., 2020) | 58.45 | 66.61
COSMIC (Ghosal et al., 2020) | 65.21 | 65.28
KET (Zhong et al., 2019b) | 58.18 | 59.56
BiF-AGRU (Jiao et al., 2019) | 58.1 | 63.5
Iterative (Lu et al., 2020) | 60.72 | 64.37
HiTrans (Li et al., 2020a) | 61.94 | 64.5
DialogXL | 62 | …

EmoBERTa shows very good results: max weighted F1 scores (%) of 65.61 (MELD) and 68.57 (IEMOCAP), above the best reported SOTA, which is especially notable considering that no modifications were made to the original RoBERTa model architecture. We also trained a model without the speaker names prepended, which drops the performance to weighted F1 scores (%) of 65.07 (MELD) and 64.02 (IEMOCAP), providing evidence that encoding the speaker information helps.…”
Section: Model
confidence: 99%
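The speaker-prepending trick described above can be sketched as a simple preprocessing step: each utterance is prefixed with its speaker's name before the dialogue is flattened into a single sequence for a RoBERTa-style encoder. This is a minimal illustrative sketch; the exact delimiter and formatting used by EmoBERTa may differ.

```python
def build_input(dialogue, sep_token="</s>"):
    """Flatten a dialogue into one string for a RoBERTa-style encoder,
    prepending each speaker's name to their utterance.
    Formatting details are illustrative, not EmoBERTa's exact pipeline."""
    turns = [f"{speaker}: {utterance}" for speaker, utterance in dialogue]
    return f" {sep_token} ".join(turns)


dialogue = [
    ("Joey", "How you doin'?"),
    ("Rachel", "I'm fine, thanks."),
]
print(build_input(dialogue))
# -> Joey: How you doin'? </s> Rachel: I'm fine, thanks.
```

The ablation quoted above (65.07 vs. 65.61 weighted F1 on MELD) suggests that even this lightweight way of exposing speaker identity to the encoder is useful.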
“…modeled ERC as sequence tagging to learn emotional consistency. Lu et al. (2020) proposed an iterative emotion interaction network to explicitly model the emotion interaction.…”
Section: Emotion Recognition in Conversations
confidence: 99%
“…2) The integration of emotional clues. Many works (Ghosal et al., 2019; Lu et al., 2020) use the attention mechanism to integrate encoded emotional clues, ignoring their intrinsic semantic order. This loses the logical relationships between clues, making it difficult to capture the key factors that trigger emotions.…”
Section: Introduction
confidence: 99%
“…IEIN (Lu et al., 2020): IEIN uses predicted emotion labels instead of gold labels and designs a loss to constrain the prediction of each iteration.…”
Section: Baselines and State of the Art
confidence: 99%
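The iterative scheme described above can be illustrated with a toy sketch: at each pass, the current predicted emotion distributions of the context utterances are embedded and mixed back into the utterance representations before re-predicting. All shapes, the additive mixing rule, and the names below are hypothetical simplifications, not IEIN's actual architecture; in the real model a loss is also applied to the prediction of every iteration.

```python
import numpy as np


def softmax(x):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def iterative_refine(utt_feats, label_emb, W, iterations=3):
    """Toy iterative emotion interaction.

    utt_feats: (n_utt, d)        utterance representations
    label_emb: (n_labels, d)     an embedding per emotion label
    W:         (d, n_labels)     classifier weights
    Returns the list of per-iteration predictions (one loss would be
    attached to each of them during training).
    """
    n, _ = utt_feats.shape
    n_labels = W.shape[1]
    preds = np.full((n, n_labels), 1.0 / n_labels)  # uniform initial beliefs
    history = []
    for _ in range(iterations):
        # inject current label beliefs of the context back into the features
        feats = utt_feats + preds @ label_emb   # (n, d)
        preds = softmax(feats @ W)              # re-predict emotions
        history.append(preds)
    return history
```

Feeding predicted (rather than gold) labels back in keeps training and inference consistent, at the cost of possible error propagation, which is what the per-iteration loss is meant to curb.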