Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1558

Domain-Invariant Feature Distillation for Cross-Domain Sentiment Classification

Abstract: Cross-domain sentiment classification has drawn much attention in recent years. Most existing approaches focus on learning domain-invariant representations in both the source and target domains, while few of them pay attention to the domain-specific information. Despite the non-transferability of the domain-specific information, simultaneously learning domain-dependent representations can facilitate the learning of domain-invariant representations. In this paper, we focus on aspect-level cross-domain sentiment classification…

Cited by 24 publications (19 citation statements)
References: 106 publications
“…• BERT (Adv): It fine-tunes BERT by BERT (C) with an additional adversarial domain loss proposed in Hu et al. (2019).…”
Section: General Experimental Results (mentioning, confidence: 99%)
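The excerpt above refers to fine-tuning BERT with an additional adversarial domain loss. The sketch below shows one common way such a loss is implemented, via a gradient-reversal layer on top of the sentence representation; the class names, the `lambda_` coefficient, and the binary source/target setup are illustrative assumptions, not the exact objective of Hu et al. (2019).

```python
# Minimal sketch (assumption: gradient-reversal formulation of the adversarial
# domain loss; names and hyper-parameters are illustrative).
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None


class AdversarialDomainLoss(nn.Module):
    """Domain discriminator on the [CLS] representation; the reversed gradients
    push the encoder towards domain-invariant features."""

    def __init__(self, hidden_size=768, lambda_=1.0):
        super().__init__()
        self.lambda_ = lambda_
        self.discriminator = nn.Linear(hidden_size, 2)  # source vs. target

    def forward(self, cls_repr, domain_labels):
        reversed_repr = GradReverse.apply(cls_repr, self.lambda_)
        logits = self.discriminator(reversed_repr)
        return nn.functional.cross_entropy(logits, domain_labels)
```

In training, this term would simply be added to the sentiment classification loss, so a single backward pass updates the task head normally while the reversed gradients discourage the shared encoder from encoding domain identity.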
“…Additionally, models such as BERT do not have the “shared-private” architecture (Liu et al., 2017), frequently used for transfer learning. One can also replace L_AD by asking the classifier to predict the flipped domain labels directly (Shu et al., 2018; Hu et al., 2019). Hence, we can instead minimize the flipped domain loss L_FD:…”
Section: Learning Domain-Invariant Representations (mentioning, confidence: 99%)
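The excerpt above describes replacing the adversarial domain loss L_AD with a flipped domain loss L_FD, where the feature extractor is trained so that the domain classifier predicts the opposite domain label rather than having its gradients reversed. Below is a minimal sketch of one way that alternation could look; the function name, the binary source/target encoding, and the two-optimizer schedule are assumptions for illustration, not the exact procedure of Shu et al. (2018) or Hu et al. (2019).

```python
import torch
import torch.nn.functional as F


def domain_training_step(encoder, domain_clf, enc_opt, clf_opt, inputs, domain_labels):
    """One alternating update: (1) fit the domain classifier on the true domain
    labels, (2) update the encoder to minimize the flipped domain loss L_FD."""
    # Step 1: train the domain classifier on the true source/target labels.
    with torch.no_grad():
        features = encoder(inputs)            # encoder frozen for this step
    clf_loss = F.cross_entropy(domain_clf(features), domain_labels)
    clf_opt.zero_grad()
    clf_loss.backward()
    clf_opt.step()

    # Step 2: update the encoder so the classifier predicts the *flipped* labels,
    # i.e. minimize L_FD instead of reversing gradients on L_AD.
    flipped_labels = 1 - domain_labels        # assumes binary 0/1 domain labels
    features = encoder(inputs)
    fd_loss = F.cross_entropy(domain_clf(features), flipped_labels)
    enc_opt.zero_grad()
    fd_loss.backward()
    enc_opt.step()
    return clf_loss.item(), fd_loss.item()
```

This mirrors the non-saturating objective used for GANs: the encoder still pushes the two domains towards indistinguishability, but it receives informative gradients even when the domain classifier is confident.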
“…Sentiment analysis has been studied under different settings in the literature (e.g., sentence-level, aspect-level, cross-domain) (Chauhan et al., 2019; Hu et al., 2019). For ABSA, the early works have performed feature engineering to produce useful features for the statistical classification models (e.g., SVM) (Wagner et al., 2014).…”
Section: Related Work (mentioning, confidence: 99%)
“…To further improve the performance, methods of exploiting additional information have been proposed (Li et al. 2018; Peng et al. 2018; He et al. 2018; Hu et al. 2019; Bahdanau, Cho, and Bengio 2015). Among these works, Peng et al. (2018) and He et al. (2018) introduced semi-supervised learning methods to utilize the target domain information.…”
Section: Introduction (mentioning, confidence: 99%)