Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1053

Towards a Universal Sentiment Classifier in Multiple Languages

Abstract: Existing sentiment classifiers usually work for only one specific language, and different classification models are used in different languages. In this paper we aim to build a universal sentiment classifier with a single classification model in multiple different languages. In order to achieve this goal, we propose to learn multilingual sentiment-aware word embeddings simultaneously based only on the labeled reviews in English and unlabeled parallel data available in a few language pairs. It is not required t…
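To make the idea concrete, below is a minimal PyTorch sketch of such a joint objective: one sentiment classifier is trained on averaged English review embeddings while an alignment loss keeps embeddings of parallel sentences close, so the same classifier can later be applied to the target language. The vocabulary sizes, averaging scheme, and variable names are illustrative assumptions, not the authors' actual model.

# Minimal sketch, assuming averaged word embeddings as sentence representations.
# Not the paper's implementation; all sizes and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, EN_VOCAB, XX_VOCAB = 64, 5000, 5000     # assumed sizes

en_emb = nn.Embedding(EN_VOCAB, EMB_DIM)         # English word embeddings
xx_emb = nn.Embedding(XX_VOCAB, EMB_DIM)         # target-language word embeddings
clf = nn.Linear(EMB_DIM, 2)                      # single classifier shared across languages

opt = torch.optim.Adam(list(en_emb.parameters()) +
                       list(xx_emb.parameters()) +
                       list(clf.parameters()), lr=1e-3)

def avg(emb, ids):
    # Average word vectors as a crude sentence representation.
    return emb(ids).mean(dim=1)

# Toy batches: labeled English reviews and unlabeled parallel sentence pairs.
en_review = torch.randint(0, EN_VOCAB, (8, 20))  # 8 reviews, 20 tokens each
labels    = torch.randint(0, 2, (8,))
en_par    = torch.randint(0, EN_VOCAB, (8, 20))  # English side of parallel data
xx_par    = torch.randint(0, XX_VOCAB, (8, 20))  # target-language side

for step in range(100):
    opt.zero_grad()
    # (a) supervised sentiment loss, English labels only
    sent_loss = F.cross_entropy(clf(avg(en_emb, en_review)), labels)
    # (b) alignment loss pulls parallel sentences together in the shared space
    align_loss = F.mse_loss(avg(en_emb, en_par), avg(xx_emb, xx_par))
    (sent_loss + align_loss).backward()
    opt.step()

# At test time the same `clf` is applied to averaged target-language embeddings,
# i.e. one classification model serves multiple languages.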

Cited by 17 publications (19 citation statements)
References 28 publications
“…Prettenhofer and Stein (2010); Xiao and Guo (2013); Pham et al. (2015); Xu and Wan (2017); Xu and Yang (2017)…”
mentioning
confidence: 99%
“…For this extended abstract, we have updated the list of baseline methods by adding new approaches (TCT [Huang et al, 2017], TrAdaB [Huang et al, 2017], DANN [Ganin et al, 2016], CL-TS [Zhou et al, 2015], Bi-PV [Xu and Wan, 2017], BiDRL [Zhou et al, 2016b], WSDNNs [Zhou et al, 2016a], CLDFA [Xu and Yang, 2017]) which have been published in the cross-domain and cross-lingual arena after our original work [Moreo Fernández et al, 2016], and kept those which performed best in our original evaluation (SCL-MI [Blitzer et al, 2007], SFA [Pan et al, 2010], SDA [Glorot et al, 2011], and SSMC [Xiao and Guo, 2014]). We also consider an upper bound that trains the classifier on the training set of the target domain ("Upper"), and a lower bound that trains the classifier on the source domain and then applies the trained classifier directly in the target domain, i.e., without carrying out any sort of knowledge transfer ("No-Trans").…”
Section: Methods
mentioning
confidence: 99%
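The "Upper" / "No-Trans" reference points described in that excerpt can be sketched in a few lines of scikit-learn. TF-IDF plus logistic regression is a stand-in classifier chosen here for illustration, and the source/target data loading is left hypothetical; any transfer method is expected to land between the two bounds.

# Minimal sketch of the upper and lower evaluation bounds, under assumed data splits.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def evaluate(train_texts, train_labels, test_texts, test_labels):
    # Train a simple bag-of-words classifier and report accuracy on the test split.
    vec = TfidfVectorizer(min_df=2)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(train_texts), train_labels)
    return accuracy_score(test_labels, clf.predict(vec.transform(test_texts)))

# src_* / tgt_* splits come from hypothetical loaders for the source and target domains.
# upper = evaluate(tgt_train_texts, tgt_train_labels, tgt_test_texts, tgt_test_labels)  # "Upper"
# lower = evaluate(src_train_texts, src_train_labels, tgt_test_texts, tgt_test_labels)  # "No-Trans"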
“…However, single-source CLT methods would incur the risk of negative transfer when there is a large language shift. Alternatively, multi-source CLT (McDonald et al, 2011; Xu and Wan, 2017), transferring from multiple source languages, has been shown to increase the stability of the transfer. Other research efforts focus on cross-lingual word representation learning (Zou et al, 2013; Mikolov et al, 2013; Conneau et al, 2018a; Artetxe et al, 2018) and mPLM (Devlin et al, 2019; Lample and Conneau, 2019; Eisenschlos et al, 2019; Chidambaram et al, 2019), which exploit unsupervised learning on large-scale multilingual corpora to learn versatile multilingual contextualized embeddings.…”
Section: Cross-lingual Transfer
mentioning
confidence: 99%
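The mPLM route mentioned in that excerpt can be illustrated with a short Hugging Face Transformers sketch: fine-tune a multilingual encoder on English sentiment labels, then run it unchanged on a target-language review. The choice of mBERT and the omitted fine-tuning loop are assumptions for illustration, not details from the cited works.

# Illustrative sketch of zero-shot cross-lingual classification with a multilingual
# pretrained encoder; the classification head here is untrained, so outputs are arbitrary.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

# ... fine-tune `model` on labeled English reviews here (omitted) ...

# Zero-shot prediction on a target-language review, relying on the shared
# multilingual representation space learned during pretraining.
batch = tok("Dieses Buch ist großartig.", return_tensors="pt")
with torch.no_grad():
    pred = model(**batch).logits.argmax(dim=-1)
print(pred.item())  # 0 or 1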