Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2014
DOI: 10.3115/v1/p14-1058
Learning to Predict Distributions of Words Across Domains

Abstract: Although the distributional hypothesis has been applied successfully in many natural language processing tasks, systems using distributional information have been limited to a single domain because the distribution of a word can vary between domains as the word's predominant meaning changes. However, if it were possible to predict how the distribution of a word changes from one domain to another, the predictions could be used to adapt a system trained in one domain to work in another. We propose an unsupervise…

Cited by 15 publications (12 citation statements)
References 23 publications
“…Subword approaches, on the other hand, are often more compositional and flexible, and we leave the extension of our method to handle subword information to future work. Our work is also related to some methods in domain adaptation and multi-lingual correlation, such as that of Bollegala et al. (2014).…”
Section: Related Work
confidence: 99%
“…In contrast, in the movie domain, the word "lightweight" usually connotes a negative opinion describing movies that do not invoke deep thoughts among the audience. This observation motivates the study of learning domain-sensitive word representations (Bollegala et al., 2015, 2014). They basically learn separate embeddings of the same word for different domains.…”
Section: Introduction
confidence: 93%
“…However, as described in Section 1, the meaning of a word varies from one domain to another, and must be considered. To the best of our knowledge, the only prior work studying the problem of word representation variation across domains is due to Bollegala et al. (2014). Given a source and a target domain, they first select a set of pivots using pointwise mutual information, and create two distributional representations for each pivot using their co-occurrence contexts in a particular domain.…”
Section: Related Work
confidence: 99%
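The pivot-based setup described in the statement above can be sketched roughly as follows. This is a minimal illustration, not the exact procedure of Bollegala et al. (2014): the windowed co-occurrence counting, the min-frequency pivot ranking, and all function names here are assumptions made for the example.

```python
import math
from collections import Counter

def build_counts(corpus, window=2):
    """Count (word, context) co-occurrences within a symmetric window."""
    cooc = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    cooc[(w, sent[j])] += 1
    return cooc

def pmi_vector(cooc, word):
    """Sparse PMI representation of `word` over its co-occurring contexts."""
    total = sum(cooc.values())
    w_marg, c_marg = Counter(), Counter()
    for (w, c), n in cooc.items():
        w_marg[w] += n
        c_marg[c] += n
    return {
        c: math.log((n / total) / ((w_marg[word] / total) * (c_marg[c] / total)))
        for (w, c), n in cooc.items() if w == word
    }

def select_pivots(src_cooc, tgt_cooc, k=10):
    """Rank words seen in both domains by their smaller domain frequency
    (an illustrative stand-in for the paper's PMI-based pivot selection)."""
    src_freq, tgt_freq = Counter(), Counter()
    for (w, _), n in src_cooc.items():
        src_freq[w] += n
    for (w, _), n in tgt_cooc.items():
        tgt_freq[w] += n
    common = set(src_freq) & set(tgt_freq)
    return sorted(common, key=lambda w: min(src_freq[w], tgt_freq[w]),
                  reverse=True)[:k]
```

With toy source (electronics) and target (movies) corpora, `select_pivots` returns words shared across the domains, and `pmi_vector` yields one distributional representation per pivot per domain, which is the pair of representations the cited method compares.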
“…For example, the phrase lightweight is often used in a positive sentiment in the portable electronics domain because a lightweight device is easier to carry around, which is a positive attribute for a portable electronic device. However, the same phrase has a negative sentiment association in the movie domain because movies that do not invoke deep thoughts in viewers are considered to be lightweight (Bollegala et al., 2014). However, existing word representation learning methods are agnostic to such domain-specific semantic variations of words, and capture semantics of words only within a single domain.…”
Section: Introduction
confidence: 99%