Proceedings of the 13th International Workshop on Semantic Evaluation 2019
DOI: 10.18653/v1/s19-2005
SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text

Abstract: In this paper, we present the SemEval-2019 Task 3-EmoContext: Contextual Emotion Detection in Text. The lack of facial expressions and voice modulations makes detecting emotions in text a challenging problem. For instance, as humans, on reading "Why don't you ever text me!" we can interpret it as either a sad or an angry emotion, and the same ambiguity exists for machines. However, the context of the dialogue can prove helpful in detecting the emotion. In this task, given a textual dialogue, i.e. an utterance along with …
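To make the task format concrete, below is a minimal Python sketch of one dialogue instance (an utterance preceded by two turns of context) mapped to one of the four labels Happy, Sad, Angry, Others. The field names, the example dialogue, and the keyword rule are illustrative assumptions; this is not the organisers' data format or baseline system.

```python
# Illustrative sketch only: field names, the example dialogue, and the
# keyword rule are assumptions, not the official EmoContext release.
from dataclasses import dataclass

EMOTIONS = ["happy", "sad", "angry", "others"]

@dataclass
class DialogueInstance:
    turn1: str   # user's first utterance (context)
    turn2: str   # other party's reply (context)
    turn3: str   # user's utterance whose emotion is to be classified

def trivial_baseline(instance: DialogueInstance) -> str:
    """Toy keyword-spotting stand-in for a real classifier; it looks
    only at the final user turn."""
    text = instance.turn3.lower()
    if "!" in text and ("never" in text or "don't" in text):
        return "angry"
    if "miss" in text or "sad" in text:
        return "sad"
    if "haha" in text or ":)" in text:
        return "happy"
    return "others"

example = DialogueInstance(
    turn1="I texted you twice today",
    turn2="I was busy at work",
    turn3="Why don't you ever text me!",
)
print(trivial_baseline(example))  # -> "angry" under this toy rule
```

A real system would, of course, condition on all three turns rather than keywords in the final one; the sketch is only meant to pin down the input/output shape of the task.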

Citations: cited by 206 publications (168 citation statements).
References: 34 publications.
“…The competition consists in classifying the emotion of an utterance given its conversational context. More formally, given a textual user utterance along with 2 turns of context in a conversation, the task is to classify the emotion of user utterance as Happy, Sad, Angry or Others (Chatterjee et al, 2019b). The conversations are extracted from Twitter.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
“…Results are presented in Table 1 and compared with different approaches. EmoContext organisers proposed a baseline classifier (referred as Baseline) that exhibits a f1-score of 0.587 (Chatterjee et al, 2019). In Table 1, we compare the proposed method with the S-LSTM and F-DNN implementations, described in 3.4 and 3.5, respectively.…”
Section: Results
Citation type: mentioning
Confidence: 99%
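For readers comparing systems against the 0.587 baseline F1 quoted above, the sketch below shows how a micro-averaged F1 restricted to the three emotion classes (with Others excluded from scoring) can be computed with scikit-learn. The gold and predicted labels are invented, and the assumption that this restricted micro-F1 matches the task's official scoring scheme should be verified against the task description paper (Chatterjee et al, 2019).

```python
# Hedged sketch: assumes scoring is micro-averaged F1 over the three
# emotion classes (happy, sad, angry), with "others" excluded.
# The gold/predicted label lists below are made up for illustration.
from sklearn.metrics import f1_score

gold = ["angry", "sad", "others", "happy", "angry", "others"]
pred = ["angry", "others", "others", "happy", "sad", "others"]

micro_f1 = f1_score(
    gold, pred,
    labels=["happy", "sad", "angry"],  # "others" is not scored
    average="micro",
)
print(f"micro-F1 over emotion classes: {micro_f1:.3f}")
```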
“…The composition of the dataset is described in Table 1. More details about the task can be found in the task description paper (Chatterjee et al, 2019).…”
Section: Shared Task Description
Citation type: mentioning
Confidence: 99%