Proceedings of the 13th International Workshop on Semantic Evaluation 2019
DOI: 10.18653/v1/s19-2042

LIRMM-Advanse at SemEval-2019 Task 3: Attentive Conversation Modeling for Emotion Detection and Classification

Abstract: This paper addresses the problem of modeling textual conversations and detecting emotions. Our proposed model makes use of 1) deep transfer learning rather than the classical shallow methods of word embedding; 2) self-attention mechanisms to focus on the most important parts of the texts; and 3) turn-based conversational modeling for classifying the emotions. Our model was evaluated on the data provided by the SemEval-2019 shared task on contextual emotion detection in text. The model shows very competitive resu…
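The abstract describes the architecture only at a high level: a transfer-learned encoder, self-attention over the tokens of each utterance, and turn-based modeling of the three-turn conversations. The sketch below is a rough, hypothetical illustration of that kind of design, not the authors' published implementation; the Bi-LSTM turn encoder, the attention-pooling module, all dimensions, and the four-class label set (happy, sad, angry, others, as used in the SemEval-2019 Task 3 data) are assumptions made for the example.

```python
# Hypothetical sketch of a turn-based conversation classifier with
# self-attention pooling (illustration only, not the LIRMM-Advanse code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionPooling(nn.Module):
    """Collapses a sequence of hidden states into one vector using learned attention weights."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, states):                      # states: (batch, seq_len, hidden_dim)
        weights = F.softmax(self.scorer(states), dim=1)   # attention over the sequence
        return (weights * states).sum(dim=1)              # (batch, hidden_dim)

class TurnBasedEmotionClassifier(nn.Module):
    """Encodes each of the three conversation turns separately, then classifies the emotion."""
    def __init__(self, vocab_size=10_000, emb_dim=300, hidden_dim=256, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.turn_encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attention = SelfAttentionPooling(2 * hidden_dim)
        self.classifier = nn.Linear(3 * 2 * hidden_dim, num_classes)  # three pooled turns concatenated

    def forward(self, turns):                       # turns: list of three (batch, seq_len) token-id tensors
        pooled = []
        for turn in turns:
            states, _ = self.turn_encoder(self.embedding(turn))
            pooled.append(self.attention(states))
        return self.classifier(torch.cat(pooled, dim=-1))  # logits over the emotion classes

# Example: a batch of 2 conversations, each with three 12-token turns.
model = TurnBasedEmotionClassifier()
turns = [torch.randint(1, 10_000, (2, 12)) for _ in range(3)]
print(model(turns).shape)  # torch.Size([2, 4])
```

In this sketch each turn is encoded and pooled independently and the three turn vectors are concatenated before classification, which is one simple way to realize "turn-based conversational modeling"; the paper itself may combine the turns differently.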

Cited by 7 publications (3 citation statements). References 11 publications (8 reference statements).
“…Ragheb et al 81 presented an attention‐based model for detecting and classifying emotions in textual conversations. The data used was provided by the SemEval‐2019 Task 3 competition organizers.…”
Section: Detection Approaches and Related Work (mentioning)
Confidence: 99%
“…Additionally, numerous research studies have explored the application of deep learning models for text-based emotion analysis. Ragheb et al [21] used Bidirectional Long Short-Term Memory units to detect emotions in textual conversations. Keshavarz & Abadeh [22] utilized a CNN model and achieved significant improvement compared to traditional machine learning approaches.…”
Section: Related Work (mentioning)
Confidence: 99%
“…The BERT-CNN performance is compared to performances of the state-of-the-art models using two datasets mentioned in the previous sections. These models are Emotdet [16], EMODET 2 [14], Nture [17], SCIA [18], Coastal [15], PKUSE [19], EPITA-ADAPT [20], Figure Eight [21], NELEC [22], THU NGN [23], LIRMM [24], NTUA-ISLab [25], Symanto Research [26], ANA [27], CAiRE-HKUST [28], GenSMT [29], SNU_IDS [30], CLARK [31], and SINAI [32].…”
Section: Compared Algorithms (mentioning)
Confidence: 99%