2023
DOI: 10.21203/rs.3.rs-2699220/v1
Preprint

A Simple Weakly-Supervised Contrastive Learning Framework for Few-shot Sentiment Classification

Abstract: Most existing deep learning-based sentiment classification methods require large amounts of human-annotated data, but labeling high-quality emotional texts at scale is labor-intensive. Users on various social platforms generate massive amounts of tagged opinionated text (e.g., tweets, customer reviews), providing a new resource for training deep models. However, some of the tagged instances carry sentiment tags that are diametrically opposed to their true semantics. We cannot use this tagged data directly because the…

Cited by 31 publications (58 citation statements)
References 33 publications

“…For instance, training on Gaussian-blurred images does not guarantee a performance increase on motion-blurred images (Geirhos et al., 2018b). Other proposed methods include training on style-transferred images (Geirhos et al., 2018a), training on adversarial images (Hendrycks and Dietterich, 2019), training on simulated noisy virtual images (Temel et al., 2017), and self-supervised methods like SimCLR (Chen et al., 2020) that train by augmenting distortions. AugMix (Hendrycks et al., 2019) creates multiple chains of augmentations to train the base network.…”
Section: Results (mentioning)
confidence: 99%
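
To make the augmentation-chain idea in the passage above concrete, here is a minimal sketch of an AugMix-style mixing procedure (Hendrycks et al., 2019) in NumPy. The three image operations, the chain depth, and the Dirichlet/Beta mixing parameters are illustrative assumptions, not the paper's exact augmentation set.

import numpy as np

def flip(img):
    # horizontal flip
    return img[:, ::-1]

def brighten(img):
    # constant brightness shift, clipped back to [0, 1]
    return np.clip(img + 0.1, 0.0, 1.0)

def posterize(img):
    # quantize intensities to 5 levels
    return np.round(img * 4) / 4

OPS = [flip, brighten, posterize]  # illustrative stand-ins for AugMix's op set

def augmix(image, k=3, depth=2, alpha=1.0, rng=None):
    # Mix k augmentation chains, each applying 1..depth random ops,
    # then blend the mixture back with the clean image.
    rng = rng or np.random.default_rng()
    weights = rng.dirichlet([alpha] * k)  # convex weights over the chains
    m = rng.beta(alpha, alpha)            # how strongly to blend the mixture in
    mix = np.zeros_like(image)
    for w in weights:
        chain = image.copy()
        for _ in range(rng.integers(1, depth + 1)):
            chain = OPS[rng.integers(len(OPS))](chain)
        mix += w * chain
    return (1 - m) * image + m * mix

# toy usage: a random 32x32 RGB image with values in [0, 1]
img = np.random.default_rng(0).random((32, 32, 3))
out = augmix(img)
print(out.shape, float(out.min()), float(out.max()))

Because each training image is mixed with several randomly composed distortion chains rather than a single fixed corruption, the network sees a broader distribution of perturbations, which is the robustness argument the quoted passage makes.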
“…In Table 3, we compare the Top-1 accuracy between perception-only inference and our proposed stochastic surprisal-based inference. All the state-of-the-art techniques require additional training data: noisy images (Vasiljevic et al., 2016), adversarial images (Hendrycks and Dietterich, 2019), self-supervised SimCLR augmentations (Chen et al., 2020), and augmentation chains (Hendrycks et al., 2019). We term these perception-only techniques f′(·) and actively infer on top of them.…”
Section: Results (mentioning)
confidence: 99%
“…Subsequently, we show that baking ARCO into contrastive pre-training (i.e., MONA [84]) provides an efficient pixel-wise contrastive learning paradigm to train deep networks that generalize well beyond the training data. ARCO is easy to implement, being built on top of off-the-shelf pixel-level contrastive learning frameworks [5, 21, 37, 41, 69], and consistently improves overall segmentation quality across all label ratios and datasets.…”
Section: Introduction (mentioning)
confidence: 99%
“…Recently, significant research efforts [4, 48, 52, 76, 89, 96, 97] have resorted to unsupervised or semi-supervised learning techniques for improving segmentation robustness. One of the most effective methods is contrastive learning (CL) [21, 39, 41, 59]. It aims to learn useful representations by contrasting semantically similar (positive) and dissimilar (negative) pairs of data points sampled from massive unlabeled data.…”
Section: Introduction (mentioning)
confidence: 99%
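
As a concrete illustration of contrasting positive and negative pairs, below is a minimal NumPy sketch of the NT-Xent objective popularized by SimCLR (Chen et al., 2020): each sample's two augmented views form the positive pair, and every other sample in the batch acts as a negative. The batch size, embedding dimension, and temperature are illustrative assumptions.

import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    # z1, z2: (N, D) embeddings of two augmented views of the same N samples
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)   # (2N, D) stacked views
    sim = z @ z.T / temperature            # temperature-scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)         # never contrast a view with itself
    n = z1.shape[0]
    # the positive for row i is its other view: i+n (first half) or i-n (second half)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # softmax cross-entropy of each view against its positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# toy usage: random embeddings for a batch of 8 samples
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print(f"NT-Xent loss: {nt_xent(z1, z2):.4f}")

Minimizing this loss pulls the two views of each sample together while pushing apart embeddings of different samples, which is the mechanism the quoted passage describes.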