2023
DOI: 10.1609/aaai.v37i11.26599
Reducing Sentiment Bias in Pre-trained Sentiment Classification via Adaptive Gumbel Attack

Jiachen Tian,
Shizhan Chen,
Xiaowang Zhang
et al.

Abstract: Pre-trained language models (PLMs) have recently enabled rapid progress on sentiment classification under the pre-train and fine-tune paradigm, where the fine-tuning phase aims to transfer the factual knowledge learned by PLMs to sentiment classification. However, current fine-tuning methods ignore the risk that PLMs cause the problem of sentiment bias, that is, PLMs tend to inject positive or negative sentiment from the contextual information of certain entities (or aspects) into their word embeddings, leadin…
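The "Gumbel attack" in the title presumably builds on the Gumbel-softmax trick for drawing differentiable samples from a categorical distribution. As a point of reference only (not the authors' method, whose details are truncated above), a minimal sketch of Gumbel-softmax sampling:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Relaxed categorical sample: softmax((logits + Gumbel noise) / tau).

    Lower tau -> closer to a one-hot sample; higher tau -> more uniform.
    """
    rng = rng or np.random.default_rng(0)
    # Gumbel(0, 1) noise via the inverse-CDF transform.
    u = rng.uniform(low=1e-12, high=1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))
    z = (np.asarray(logits) + g) / tau
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical example: three-class logits perturbed at low temperature.
probs = gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.5)
```

The output is a valid probability vector, so it can replace a hard argmax wherever gradients must flow through a discrete choice.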

Cited by 1 publication
References 26 publications