Proceedings of the 13th International Workshop on Semantic Evaluation 2019
DOI: 10.18653/v1/s19-2023

CLaC Lab at SemEval-2019 Task 3: Contextual Emotion Detection Using a Combination of Neural Networks and SVM

Abstract: This paper describes the CLaC Lab system at SemEval 2019, Task 3 (EmoContext), which focused on the contextual detection of emotions in a dataset of 3-round dialogues. For our final system, we used a neural network with pretrained ELMo word embeddings and POS tags as input, GRUs as hidden units, an attention mechanism to capture representations of the dialogues, and an SVM classifier which used the learned network representations to perform the task of multi-class classification. This system yielded a micro-av…
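
Below is a minimal sketch of the kind of pipeline the abstract describes, written in PyTorch with scikit-learn. The ELMo dimensionality, POS-tag vocabulary size, GRU width, attention form, and SVM kernel are illustrative assumptions rather than the CLaC Lab's actual settings, and the encoder is left untrained here for brevity.

import torch
import torch.nn as nn
from sklearn.svm import SVC

class DialogueEncoder(nn.Module):
    """GRU encoder over ELMo + POS-tag inputs with attention pooling (sketch)."""
    def __init__(self, elmo_dim=1024, pos_dim=16, n_pos_tags=20, hidden=128):
        super().__init__()
        self.pos_emb = nn.Embedding(n_pos_tags, pos_dim)
        self.gru = nn.GRU(elmo_dim + pos_dim, hidden,
                          batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)  # scores each time step

    def forward(self, elmo_vecs, pos_ids):
        # elmo_vecs: (batch, seq_len, elmo_dim); pos_ids: (batch, seq_len)
        x = torch.cat([elmo_vecs, self.pos_emb(pos_ids)], dim=-1)
        h, _ = self.gru(x)                            # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        return (weights * h).sum(dim=1)               # attended dialogue vector

# Toy tensors standing in for precomputed ELMo vectors and POS-tag ids.
elmo_vecs = torch.randn(8, 12, 1024)
pos_ids = torch.randint(0, 20, (8, 12))
labels = [0, 1, 2, 3, 0, 1, 2, 3]  # e.g. happy / sad / angry / others

encoder = DialogueEncoder()
with torch.no_grad():
    reps = encoder(elmo_vecs, pos_ids).numpy()

# The learned dialogue representations then go to an SVM for the final
# multi-class decision, as in the abstract.
svm = SVC(kernel="rbf")
svm.fit(reps, labels)
print(svm.predict(reps[:2]))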

Cited by 2 publications (1 citation statement)
References 22 publications
“…Text Embedding Layer: Owing to the success of transfer learning and pre-training of language models in NLP, we use Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) to encode the texts. BERT has been shown to capture more contextual text representations than methods such as word2vec (Hu et al., 2017), GloVe (Xu and Cohen, 2018), and ELMo (Mohammadi et al., 2019). We encode each text t to a higher-dimensional representation m = BERT(t) ∈ R^d, where d = 768, obtained by averaging the token-level outputs from the final layer of BERT.…”
Section: Intra-day Textual Information Encoder
confidence: 99%
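
The quoted passage describes a concrete pooling step: one 768-dimensional vector per text, taken as the mean of the final BERT layer's token outputs. A minimal sketch with the Hugging Face transformers library follows; the checkpoint name and the decision to include special tokens in the average are assumptions, since the citing paper does not specify them here.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Return m = mean over final-layer token outputs, a 768-dim vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # last_hidden_state: (1, num_tokens, 768) -> average over the token axis
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

m = embed("I am so happy to hear that!")
print(m.shape)  # torch.Size([768])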