Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017) 2017
DOI: 10.18653/v1/s17-2088
SemEval-2017 Task 4: Sentiment Analysis in Twitter

Abstract: This paper describes the fifth year of the Sentiment Analysis in Twitter task. SemEval-2017 Task 4 continues with a rerun of the subtasks of SemEval-2016 Task 4, which include identifying the overall sentiment of the tweet, sentiment towards a topic with classification on a two-point and on a five-point ordinal scale, and quantification of the distribution of sentiment towards a topic across a number of tweets: again on a two-point and on a five-point ordinal scale. Compared to 2016, we made two changes: (i) we…

Cited by 664 publications (509 citation statements); references 68 publications.
“…Tables 2-4 summarize the results. For subtasks A & B, recall and F1 scores are assessed as averaged scores according to the task organizers (see Rosenthal et al. 2017 for a detailed discussion of the evaluation metrics). The test data provided by SemEval-2017 Task 4 is so far one of the largest annotated sentiment analysis test datasets.…”
Section: Results (mentioning)
confidence: 99%
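The citation statement above refers to the task's averaged evaluation scores. As a minimal sketch, assuming the standard three-way labels (positive/negative/neutral) and illustrative data, the two averaged scores commonly reported for this task can be computed as macro-averaged recall over all classes and as the mean F1 of the positive and negative classes; the function names `avg_recall` and `f1_pn` are ours, not from the paper or any official scorer:

```python
def avg_recall(y_true, y_pred, labels=("positive", "negative", "neutral")):
    """Macro-averaged recall: mean of per-class recall over the given labels."""
    recalls = []
    for lab in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        support = sum(1 for t in y_true if t == lab)
        recalls.append(tp / support if support else 0.0)
    return sum(recalls) / len(recalls)

def f1_pn(y_true, y_pred):
    """Mean F1 of the positive and negative classes only."""
    f1s = []
    for lab in ("positive", "negative"):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        n_pred = sum(1 for p in y_pred if p == lab)
        n_true = sum(1 for t in y_true if t == lab)
        prec = tp / n_pred if n_pred else 0.0
        rec = tp / n_true if n_true else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / 2

# Illustrative gold and predicted labels:
gold = ["positive", "negative", "neutral", "positive"]
pred = ["positive", "neutral", "neutral", "negative"]
print(avg_recall(gold, pred))  # 0.5
print(f1_pn(gold, pred))       # 0.333...
```

Macro-averaging over recall makes the score robust to class imbalance in the test set, which matters here because tweet sentiment distributions shift across topics and years.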
“…In the following, we describe our approach and present supportive findings evaluated on a large set of test data provided by SemEval-2017 Task 4 (Rosenthal et al., 2017).…”
Section: Introduction (mentioning)
confidence: 99%
“…To run our experiments, we used datasets provided by the task organizers (Rosenthal et al., 2017) as follows. During evaluation, we trained our models on the TRAIN set, and evaluated our different systems on the DEV set.…”
Section: Datasets and Preprocessing (mentioning)
confidence: 99%
“…In this paper, we present the different systems we developed as part of our participation in SemEval-2017 Task 4 on Sentiment Analysis in Twitter (Rosenthal et al, 2017). This task covers both English and Arabic languages.…”
Section: Introduction (mentioning)
confidence: 99%
“…Task 4 of SemEval 2017, Sentiment Analysis in Twitter (Rosenthal et al, 2017), has included some new subtasks this year. One of these subtasks considers user information to be also integrated in proposed systems.…”
Section: Introduction (mentioning)
confidence: 99%