2019
DOI: 10.48550/arxiv.1909.10681
Preprint
Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations

Cited by 14 publications
(29 citation statements)
References 50 publications
“…RNN-based methods: CNN+cLSTM (Poria et al 2017), DialogueRNN (Majumder et al 2019) KET (Zhong, Wang, and Miao 2019): A transformer structure with hierarchical self-attention and external commonsense knowledge.…”
Section: Baseline Methods
confidence: 99%
“…With the rise of Transformers and graph neural networks in NLP tasks, many works have also introduced them into the ERC task. Zhong, Wang, and Miao (2019) propose KET, a structure of hierarchical Transformers assisted by external commonsense knowledge. DialogXL (Shen et al 2020) applies dialogue-aware self-attention to deal with multi-party structures.…”
Section: Emotion Recognition In Conversation
confidence: 99%
“…Studies on ERC applied to text followed, mainly built on an artificial conversation dataset named DailyDialog (Li et al, 2017). Zhong et al (2019) incorporated a knowledge base into the network using context-aware attention and hierarchical self-attention with Transformers (Vaswani et al, 2017). Ghosal et al (2019) use graph neural networks to deal with context propagation limitations.…”
Section: Related Work
confidence: 99%
“…We choose DailyDialog for comparison and reproducibility purposes, as it is often used for ERC. In this work, we use the train/val/test splits provided by Zhong et al (2019).…”
Section: DailyDialog
confidence: 99%