Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), 2015
DOI: 10.18653/v1/s15-2101
Swiss-Chocolate: Combining Flipout Regularization and Random Forests with Artificially Built Subsystems to Boost Text-Classification for Sentiment

Abstract: We describe a classifier for predicting message-level sentiment of English microblog messages from Twitter. This paper describes our submission to the SemEval-2015 competition (Task 10). Our approach is to combine several variants of our previous year's SVM system into one meta-classifier, which was then trained using a random forest. The main idea is that the meta-classifier allows combining the strengths and overcoming some of the weaknesses of the artificially built individual classifiers, and adds a…
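The stacking approach described in the abstract can be illustrated with a minimal sketch: several SVM variants produce out-of-fold prediction scores, and a random forest meta-classifier learns to combine them. This is only an illustration assuming scikit-learn; the SVM variants, features, and hyperparameters below are placeholders, not the authors' actual subsystems.

```python
# Hypothetical sketch of the stacking idea from the abstract: several SVM
# variants feed a random forest meta-classifier. All names are illustrative.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def build_meta_features(svm_variants, X, y):
    """Stack out-of-fold decision scores of each SVM variant column-wise."""
    columns = []
    for svm in svm_variants:
        # Out-of-fold scores avoid leaking training labels into the meta-level.
        scores = cross_val_predict(svm, X, y, cv=5, method="decision_function")
        if scores.ndim == 1:          # binary case: one score per sample
            scores = scores[:, None]
        columns.append(scores)
    return np.hstack(columns)

# Assumed inputs: X is a feature matrix (e.g. n-gram counts), y holds
# sentiment labels {negative, neutral, positive}.
def train_stacked_classifier(X, y):
    svm_variants = [LinearSVC(C=c) for c in (0.1, 1.0, 10.0)]  # placeholder variants
    meta_X = build_meta_features(svm_variants, X, y)
    meta_clf = RandomForestClassifier(n_estimators=200, random_state=0)
    meta_clf.fit(meta_X, y)
    # Refit the base SVMs on all data for use at prediction time.
    fitted = [svm.fit(X, y) for svm in svm_variants]
    return fitted, meta_clf
```

At prediction time the refit base SVMs score a new message, and the random forest maps their stacked scores to a final sentiment label.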

Cited by 1 publication (2 citation statements)
References 5 publications
“…We now study how the CNN performs when trained and/or tested on the three German sentiment corpora we are aware of: SB10k (from this paper, 9738 tweets), MGS corpus (109'130 tweets, (Mozetič et al., 2016)), and DAI corpus (1800 tweets, (Narr et al., 2012)). For comparison, we implemented a feature-based system using a Support Vector Machine (SVM). Feature selection is based on the system described in (Uzdilli et al., 2015), which ranked 8th in the SemEval competition of 2015, and includes n-grams, various lexical features, and statistical text properties. We use the macro-averaged F1-score of the positive and negative class, i.e.…”
Section: Benchmark for German Sentiment Analysis (mentioning, confidence: 99%)
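The "macro-averaged F1-score of the positive and negative class" mentioned in the quoted passage is the usual SemEval message-level sentiment metric: the neutral class is left out of the average, although it still influences the per-class precision and recall. A minimal sketch, assuming scikit-learn and illustrative label names:

```python
# Macro-average of the F1-scores for the positive and negative classes only.
# Label strings below are assumptions for illustration.
from sklearn.metrics import f1_score

def macro_f1_pos_neg(y_true, y_pred):
    # average="macro" with an explicit label subset averages F1 over
    # exactly the positive and negative classes.
    return f1_score(y_true, y_pred, labels=["positive", "negative"], average="macro")

# Toy example:
y_true = ["positive", "negative", "neutral", "positive", "negative"]
y_pred = ["positive", "neutral",  "neutral", "positive", "negative"]
print(macro_f1_pos_neg(y_true, y_pred))  # 0.8333... = (1.0 + 0.6667) / 2
```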