Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1061
INF-UFRGS-OPINION-MINING at SemEval-2016 Task 6: Automatic Generation of a Training Corpus for Unsupervised Identification of Stance in Tweets

Abstract: This paper describes a weakly supervised solution for detecting stance in tweets, submitted to the SemEval 2016 Stance Task. Our approach is based on the premise that stance can be exposed as positive or negative opinions, although not necessarily about the stance target itself. Our system receives as input ngrams representing opinion targets and common terms used to denote stance (e.g. hashtags), and uses these features, together with sentiment detection solutions, to automatically compose a large training …
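The abstract describes labelling tweets automatically from opinion-target ngrams, stance hashtags, and sentiment polarity. A minimal sketch of that idea, with entirely hypothetical keyword lists and a toy sentiment detector standing in for the paper's actual lexicons and sentiment solutions:

```python
# Hedged sketch of the weakly supervised labelling idea from the abstract.
# All keyword lists and the sentiment rule below are illustrative assumptions,
# not the paper's actual resources.

FAVOR_TAGS = {"#makeamericagreatagain"}    # hypothetical pro-stance hashtags
AGAINST_TAGS = {"#nevertrump"}             # hypothetical anti-stance hashtags
TARGET_NGRAMS = {"donald trump", "trump"}  # hypothetical opinion-target ngrams

def simple_sentiment(text):
    """Toy polarity detector: +1 positive, -1 negative, 0 neutral."""
    pos, neg = {"great", "love", "best"}, {"bad", "hate", "worst"}
    words = set(text.lower().split())
    return (len(words & pos) > len(words & neg)) - (len(words & neg) > len(words & pos))

def weak_label(tweet):
    """Assign a stance label from hashtags, or from sentiment toward a target ngram."""
    t = tweet.lower()
    if any(tag in t for tag in FAVOR_TAGS):
        return "FAVOR"
    if any(tag in t for tag in AGAINST_TAGS):
        return "AGAINST"
    if any(ng in t for ng in TARGET_NGRAMS):
        s = simple_sentiment(t)
        if s > 0:
            return "FAVOR"
        if s < 0:
            return "AGAINST"
    return None  # tweet is left out of the auto-composed training corpus
```

Tweets that match neither a stance hashtag nor a polarized mention of a target ngram are simply discarded, which is how such pipelines trade recall for label precision.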

Cited by 28 publications (42 citation statements)
References 6 publications
“…One way that was suggested to handle a target entity that is unmentioned in the text is to analyze the opinion toward the entity's opponent or supporter. For example, [17] constructed a list of keywords that identifies Trump using a dataset labeled with stances toward Hillary. Using this list of keywords helps in detecting the unexpressed stance towards Trump.…”
Section: Related Work
confidence: 99%
“…By training conditional encoding models on automatically labelled stance detection data (Dias and Becker, 2016) we achieve state-of-the-art results. The best result (F1 of 0.5803) is achieved with the bi-directional conditional encoding model (BiCond).…”
Section: Results
confidence: 91%
“…The goal of experiments reported in this section is to compare against participants in the SemEval 2016 Stance Detection Task B. While we consider an unseen target setup, most submissions, including the three highest ranking ones for Task B, pkudblab (Wei et al., 2016), LitisMind (Zarrella and Marsh, 2016) and INF-UFRGS (Dias and Becker, 2016), considered a different experimental setup. They automatically annotated training data for the test target Donald Trump, thus converting the task into weakly supervised seen target stance detection.…”
Section: Weakly Supervised Stance Detection
confidence: 99%
“…Table III shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. [8] used feature-based SVM, [40] used keyword rules, LitisMind relied on hashtag rules on external data, [39] utilized a combination of sentiment classifiers and rules, whereas [38] used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval.…”
Section: Models
confidence: 99%