Proceedings of the 7th ACM IKDD CoDS and 25th COMAD 2020
DOI: 10.1145/3371158.3371194

Weakly-Supervised Deep Learning for Domain Invariant Sentiment Classification

Abstract: The task of learning a sentiment classification model that adapts well to any target domain, different from the source domain, is a challenging problem. The majority of existing approaches focus on learning a common representation by leveraging both source and target data during training. In this paper, we introduce a two-stage training procedure that leverages weakly supervised datasets for developing simple lift-and-shift-based predictive models without being exposed to the target domain during the training …
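The lift-and-shift setup described in the abstract amounts to training a sentiment classifier entirely on source-domain (and weakly supervised) data and then applying it unchanged to unseen target domains. Below is a minimal evaluation sketch of that pattern, assuming a Hugging Face-style BERT/DistilBERT checkpoint fine-tuned for binary sentiment; the model name and example texts are illustrative and not the authors' exact setup.

# Lift-and-shift evaluation sketch (illustrative, not the paper's exact pipeline):
# a classifier fine-tuned on one source domain is applied as-is to
# target-domain text it never saw during training.
from transformers import pipeline

# Any binary sentiment checkpoint fine-tuned on a single source domain
# plays the same role here.
clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")

target_domain_reviews = [
    "The battery died after two weeks.",            # product review (target domain)
    "Friendly staff and the pasta was excellent.",  # restaurant review (target domain)
]

for text in target_domain_reviews:
    pred = clf(text)[0]
    print(f"{pred['label']:>8}  {pred['score']:.3f}  {text}")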

Cited by 4 publications (2 citation statements). References 26 publications.
“…Given the binary nature of these labels, they only suggest whether a review is positive or negative, not how positive or negative it is. Scholars find that BERT and related sentiment-analysis models trained on IMDb generalize well to other datasets and contexts (Xie et al. 2019; Kayal, Singh and Goyal 2020). When Kayal et al. (2020) generalized the BERT model fine-tuned on IMDb to sentiment analysis of various Amazon, Yelp, weather, and scientific reviews, the corresponding F1 scores, defined as the harmonic mean of precision and recall (sensitivity), varied from 80.5 percent to 92.5 percent (average 86.5 percent)…”
Section: Sentiment Analysis Using Bidirectional Encoder Representations…
Citation type: mentioning (confidence: 98%)
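As a quick reminder of the metric quoted above, the F1 score is the harmonic mean of precision and recall. A small self-contained sketch follows; the precision and recall values are made up for illustration, not taken from the cited paper.

# F1 score as the harmonic mean of precision and recall (illustrative values only).
def f1_score(precision: float, recall: float) -> float:
    """Return the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: precision 0.90, recall 0.83 -> F1 ~ 0.864
print(round(f1_score(0.90, 0.83), 3))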
“…Russell and Mehrabian's VAD model [23], for instance, interprets emotions as points in a 3-D space with Valence (degree of pleasure or displeasure), Arousal (degree of calmness or excitement), and Dominance (degree of authority or submission) as the three orthogonal dimensions. Accordingly, the literature on text-based emotion analysis can be broadly divided into coarse-grained classification systems [10, 12-14, 28] and fine-grained regression systems [22, 24, 29, 30]. Although a coarse-grained approach is better suited to the task of detecting emotions from tweets, as observed in [4], prior works fail to exploit the direct correlation between the two models of emotion representation for finer interpretation…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
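To make the VAD representation mentioned in that excerpt concrete, here is a small illustrative sketch that places a few categorical emotion labels at hand-picked points in the valence-arousal-dominance cube and maps a fine-grained VAD prediction back to the nearest coarse label. The coordinates and the nearest-neighbour mapping are assumptions for illustration, not values or methods from [23] or from this paper.

# Illustrative VAD (Valence, Arousal, Dominance) lookup: categorical emotions
# as points in a 3-D cube scaled to [0, 1]. Coordinates are hand-picked,
# not the values from Russell & Mehrabian.
import math

VAD = {
    "joy":     (0.90, 0.70, 0.65),
    "anger":   (0.15, 0.80, 0.70),
    "sadness": (0.20, 0.30, 0.25),
    "fear":    (0.10, 0.85, 0.20),
}

def nearest_emotion(valence: float, arousal: float, dominance: float) -> str:
    """Map a fine-grained VAD prediction back to the closest coarse label."""
    point = (valence, arousal, dominance)
    return min(VAD, key=lambda e: math.dist(point, VAD[e]))

# A regression system predicting VAD values can be read back as a coarse class:
print(nearest_emotion(0.82, 0.65, 0.60))  # -> "joy"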