2018
DOI: 10.1007/978-3-030-00810-9_15
Studying the Effects of Text Preprocessing and Ensemble Methods on Sentiment Analysis of Brazilian Portuguese Tweets

Cited by 5 publications (7 citation statements)
References 9 publications
“…A work studying the effects of text preprocessing on sentiment analysis of Brazilian Portuguese tweets [1] uses heuristics in machine learning algorithms to address accuracy and polarity bias, resulting in increased classification accuracy.…”
Section: A. Related Work (mentioning)
confidence: 99%
“…People across the world post online reviews of products and services. Such data holds value for various stakeholders, creating a need for efficient ways to scrape and process it [1]. When using natural language processing (NLP) techniques, effective preprocessing enables improved detection and interpretation, with significant effects on the results [2].…”
Section: Introduction (mentioning)
confidence: 99%
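The quoted passage stresses preprocessing as a key step for tweet sentiment analysis but does not list concrete operations. Below is a minimal Python sketch of commonly assumed tweet-cleaning steps (URL, mention and hashtag handling, lowercasing, whitespace normalization); the regular expressions and the example tweet are illustrative assumptions, not the pipeline used by the cited or citing papers.

```python
# Minimal sketch of tweet preprocessing for sentiment analysis.
# The quoted work does not specify its exact steps; the choices below
# are common assumptions, not the authors' actual pipeline.
import re

URL_RE = re.compile(r"https?://\S+")
MENTION_RE = re.compile(r"@\w+")
HASHTAG_RE = re.compile(r"#(\w+)")

def preprocess_tweet(text: str) -> str:
    """Normalize a raw tweet into plain lowercase text."""
    text = URL_RE.sub(" ", text)        # drop links
    text = MENTION_RE.sub(" ", text)    # drop @user mentions
    text = HASHTAG_RE.sub(r"\1", text)  # keep the hashtag word, drop '#'
    text = re.sub(r"\s+", " ", text)    # collapse whitespace
    return text.strip().lower()

print(preprocess_tweet("Adorei o produto! @loja #recomendo https://t.co/x"))
# -> "adorei o produto! recomendo"
```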
“…Portuguese is the lack of standard annotated datasets. For the present work we have considered some of the few recent corpora of tweets in Brazilian Portuguese cited in the literature, including PELESent [3], TweetSentBR [4] and BRTweetSentCorpus [5].…”
Section: Datasets for Brazilian Portuguese Sentiment Analysis (mentioning)
confidence: 99%
“…Since we are working with a perfectly balanced dataset, accuracy (the percentage of items classified correctly) seems to be the most adequate metric to evaluate the models. The experiments used a subset of BRTweetSentCorpus with 3180 positive and 3180 negative tweets, as done by [5], in order to have a baseline against which to compare our approach. The binary cross-entropy loss function and the Adam optimization algorithm were used to update the network weights.…”
Section: Convolutional Neural Net Architecture (mentioning)
confidence: 99%
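The quoted description fixes only three details: a balanced 3180/3180 subset of BRTweetSentCorpus evaluated with accuracy, a binary cross-entropy loss, and the Adam optimizer. The Keras sketch below wires those pieces together; the vocabulary size, embedding dimension, filter count and sequence length are placeholder assumptions, not the architecture reported by the citing authors.

```python
# Minimal sketch of a binary sentiment CNN trained with binary
# cross-entropy and Adam, as the quoted text states. Layer sizes and
# input length are placeholder assumptions, not the reported setup.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000  # assumed vocabulary size
MAX_LEN = 40         # assumed tweet length in tokens

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 100),              # embedding dim assumed
    layers.Conv1D(128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),          # P(positive)
])

# Loss and optimizer as stated in the citation; accuracy is a fair
# metric here because the 3180/3180 subset is perfectly balanced.
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```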