Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016
DOI: 10.18653/v1/S16-1002

SemEval-2016 Task 5: Aspect Based Sentiment Analysis

Abstract: This paper describes the SemEval 2016 shared task on Aspect Based Sentiment Analysis (ABSA), a continuation of the respective tasks of 2014 and 2015. In its third year, the task provided 19 training and 20 testing datasets for 8 languages and 7 domains, as well as a common evaluation procedure. From these datasets, 25 were for sentence-level and 14 for text-level ABSA; the latter was introduced for the first time as a subtask in SemEval. The task attracted 245 submissions from 29 teams.
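
To make the sentence-level annotation scheme concrete, here is a minimal sketch that parses a snippet in the style of the SemEval-2016 ABSA XML releases; the element and attribute names (sentence, Opinion, category, polarity) and the sample text are assumptions based on that released format, not content quoted on this page.

```python
# Minimal sketch, assuming a SemEval-2016 ABSA-style XML layout:
# <sentence> elements with nested <Opinion> annotations that carry
# "category" and "polarity" attributes. Adjust names if the real
# files differ.
import xml.etree.ElementTree as ET

SAMPLE = """
<Review>
  <sentences>
    <sentence id="1">
      <text>The fish was fresh but the waiter was rude.</text>
      <Opinions>
        <Opinion category="FOOD#QUALITY" polarity="positive"/>
        <Opinion category="SERVICE#GENERAL" polarity="negative"/>
      </Opinions>
    </sentence>
  </sentences>
</Review>
"""

def extract_opinions(xml_string):
    """Yield (sentence text, [(category, polarity), ...]) tuples."""
    root = ET.fromstring(xml_string)
    for sentence in root.iter("sentence"):
        text = sentence.findtext("text")
        opinions = [(o.get("category"), o.get("polarity"))
                    for o in sentence.iter("Opinion")]
        yield text, opinions

for text, opinions in extract_opinions(SAMPLE):
    print(text, opinions)
```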

Cited by 1,093 publications (758 citation statements). References 9 publications.
“…Aspect extraction has traditionally been associated with the sentiment analysis community (Liu, 2012; Pontiki et al., 2016), with the goal being to decompose a small document of text (e.g., a review) into multiple facets, each of which may carry its own sentiment marker. For example, a restaurant review may comment on the ambiance, service, and food, preventing the assignment of a uniform sentiment over the entire review.…”
Section: Introduction
confidence: 99%
“…Therefore, the commonly used word similarity datasets are used to evaluate the proposed semantic similarity method and graph-based IC based on WordNet and DBpedia. Moreover, the semantic similarity methods are evaluated in an aspect category classification task (Pontiki et al., 2015, 2016a) in order to evaluate their performance in a real application. This section presents the datasets, implementation, and evaluation, and provides a brief discussion of the obtained experimental results.…”
Section: Semantic Similarity Evaluation
confidence: 99%
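
The evaluation described in that statement relies on WordNet-based, information-content similarity measures. The snippet below is a minimal, generic sketch of one standard such measure (Resnik similarity through NLTK with Brown-corpus IC counts); it is not the graph-based IC method proposed in the citing work, only a baseline of the same family.

```python
# Minimal sketch of a WordNet + information-content similarity score
# (Resnik similarity with the Brown corpus IC file via NLTK). This is
# a standard baseline, not the graph-based IC measure of the citing work.
import nltk
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

nltk.download("wordnet", quiet=True)
nltk.download("wordnet_ic", quiet=True)

brown_ic = wordnet_ic.ic("ic-brown.dat")

def max_res_similarity(word1, word2):
    """Max Resnik similarity over all noun-synset pairs of the two words."""
    scores = [s1.res_similarity(s2, brown_ic)
              for s1 in wn.synsets(word1, pos=wn.NOUN)
              for s2 in wn.synsets(word2, pos=wn.NOUN)]
    return max(scores) if scores else 0.0

print(max_res_similarity("waiter", "service"))
print(max_res_similarity("pizza", "food"))
```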
“…Based on this observation, we proposed a similarity-based framework for concept classification, in which a concept's features are represented by frequent collocated words, while feature vectors are constructed by computing semantic similarity between input words and feature words. We demonstrate the similarity-based classification framework on the aspect category classification subtask of Aspect Based Sentiment Analysis (ABSA) (Pontiki et al., 2015, 2016a) by proposing both an unsupervised model and a supervised model.…”
Section: Similarity-based Classification
confidence: 99%
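
To illustrate the similarity-based framework sketched in that statement, the toy classifier below scores each aspect category by the similarity between a sentence's words and a handful of category feature words, then picks the highest-scoring category. The seed words and the crude character-bigram similarity are placeholders for illustration only; the citing work uses frequent collocated words and a semantic similarity measure instead.

```python
# Toy sketch of similarity-based aspect category classification:
# each category is represented by a few "feature" words, an input
# sentence is scored by word-to-feature similarity, and the
# highest-scoring category wins.

CATEGORY_FEATURES = {
    "FOOD#QUALITY": ["food", "pizza", "fresh", "delicious"],
    "SERVICE#GENERAL": ["waiter", "service", "staff", "rude"],
    "AMBIENCE#GENERAL": ["ambiance", "music", "decor", "noisy"],
}

def bigram_similarity(w1, w2):
    """Placeholder similarity: Jaccard overlap of character bigrams."""
    b1 = {w1[i:i + 2] for i in range(len(w1) - 1)}
    b2 = {w2[i:i + 2] for i in range(len(w2) - 1)}
    return len(b1 & b2) / len(b1 | b2) if b1 | b2 else 0.0

def classify(sentence, similarity=bigram_similarity):
    """Return (best category, per-category scores) for one sentence."""
    words = sentence.lower().split()
    scores = {
        cat: sum(max(similarity(w, f) for f in feats) for w in words) / len(words)
        for cat, feats in CATEGORY_FEATURES.items()
    }
    return max(scores, key=scores.get), scores

label, scores = classify("The waiters were friendly but the music was too loud")
print(label, scores)
```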