Proceedings of the 12th International Workshop on Semantic Evaluation 2018
DOI: 10.18653/v1/s18-1018
UWB at SemEval-2018 Task 1: Emotion Intensity Detection in Tweets

Abstract: This paper describes our system created for the SemEval-2018 Task 1: Affect in Tweets (AIT-2018). We participated in both the regression and the ordinal classification subtasks for emotion intensity detection in English, Arabic, and Spanish. For the regression subtask we use the AffectiveTweets system with added features based on various word embeddings, lexicons, and LDA. For the ordinal classification we additionally use our Brainy system with features based on parse trees, POS tags, and morphological features. The …
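The regression subtask relies in part on word-embedding features. As one illustration (a minimal sketch of a common technique, not necessarily the authors' exact setup), a tweet-level dense feature vector can be built by averaging pre-trained word vectors over the tweet's tokens; the EmbeddingFeatures class and the toy vectors below are hypothetical.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

/** Minimal sketch: average pre-trained word vectors over a tweet's tokens. */
public class EmbeddingFeatures {

    /** Returns the mean of the embedding vectors of all in-vocabulary tokens
     *  (a zero vector if none are found). */
    public static double[] averageEmbedding(List<String> tokens,
                                            Map<String, double[]> embeddings,
                                            int dim) {
        double[] mean = new double[dim];
        int hits = 0;
        for (String token : tokens) {
            double[] vec = embeddings.get(token.toLowerCase());
            if (vec == null) {
                continue;                 // out-of-vocabulary token, skip it
            }
            for (int i = 0; i < dim; i++) {
                mean[i] += vec[i];
            }
            hits++;
        }
        if (hits > 0) {
            for (int i = 0; i < dim; i++) {
                mean[i] /= hits;
            }
        }
        return mean;
    }

    public static void main(String[] args) {
        // Toy 3-dimensional "embeddings" for illustration only.
        Map<String, double[]> emb = Map.of(
                "happy", new double[]{0.9, 0.1, 0.0},
                "sad",   new double[]{-0.8, 0.2, 0.1});
        double[] features = averageEmbedding(Arrays.asList("so", "happy", "today"), emb, 3);
        System.out.println(Arrays.toString(features));   // ~[0.9, 0.1, 0.0]
    }
}
```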

Cited by 2 publications (2 citation statements)
References 23 publications
“…In the literature, a wide range of features has been explored for the task of tweet sentiment analysis, including unigrams, bigrams, n-grams, part-of-speech (POS) tags, word embeddings, and word clusters [23], [24], [25], [26], [7], [27]. In this work we use the TweetToSparseFeatureVector filter in the Weka AffectiveTweets [4] package to extract word n-grams, character n-grams, Brown word clusters, and part-of-speech tags.…”
Section: Tweet Feature Extraction
confidence: 99%
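The statement above refers to the TweetToSparseFeatureVector filter from the AffectiveTweets Weka package. A minimal sketch of applying such a filter through the standard Weka filter API is shown below; the class path follows the AffectiveTweets documentation, the file name tweets.arff is illustrative, and the n-gram, Brown-cluster, and POS options are assumed to be set through the filter's options rather than shown explicitly.

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
// Class path as documented for the AffectiveTweets package (must be installed in Weka).
import weka.filters.unsupervised.attribute.TweetToSparseFeatureVector;

/** Sketch: apply the AffectiveTweets sparse-feature filter to a tweet dataset. */
public class SparseTweetFeatures {
    public static void main(String[] args) throws Exception {
        // ARFF file with a string attribute holding the raw tweet text (path is illustrative).
        Instances tweets = DataSource.read("tweets.arff");
        tweets.setClassIndex(tweets.numAttributes() - 1);

        TweetToSparseFeatureVector filter = new TweetToSparseFeatureVector();
        // Word/character n-grams, Brown clusters and POS tags are enabled through the
        // filter's options; see the package documentation for the exact option names.
        filter.setInputFormat(tweets);

        Instances sparseFeatures = Filter.useFilter(tweets, filter);
        System.out.println("Generated " + sparseFeatures.numAttributes() + " attributes");
    }
}
```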
“…Tweets often contain slang expressions, misspelled words, emoticons, or abbreviations, and some preprocessing steps are needed before training and making predictions. We use a similar approach to Přibáň et al. (2018).…”
Section: Tweets Preprocessing
confidence: 99%
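As an illustration of the kind of normalization such preprocessing typically involves (a generic sketch, not the exact pipeline of Přibáň et al. (2018)), the hypothetical TweetPreprocessor below lowercases the text, masks URLs and user mentions with placeholder tokens, and squeezes characters repeated three or more times.

```java
import java.util.regex.Pattern;

/** Illustrative tweet normalization, not the exact pipeline of Přibáň et al. (2018). */
public class TweetPreprocessor {

    private static final Pattern URL     = Pattern.compile("https?://\\S+");
    private static final Pattern MENTION = Pattern.compile("@\\w+");
    private static final Pattern REPEAT  = Pattern.compile("(.)\\1{2,}");

    public static String normalize(String tweet) {
        String text = tweet.toLowerCase();
        text = URL.matcher(text).replaceAll("<url>");        // mask links
        text = MENTION.matcher(text).replaceAll("<user>");    // mask user mentions
        text = REPEAT.matcher(text).replaceAll("$1$1");       // "soooo" -> "soo"
        return text.trim();
    }

    public static void main(String[] args) {
        System.out.println(normalize("Soooo happy!!! @friend check https://t.co/xyz"));
        // -> "soo happy!! <user> check <url>"
    }
}
```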