Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1497
Learning Disentangled Representations of Texts with Application to Biomedical Abstracts

Abstract: We propose a method for learning disentangled representations of texts that code for distinct and complementary aspects, with the aim of affording efficient model transfer and interpretability. To induce disentangled embeddings, we propose an adversarial objective based on the (dis)similarity between triplets of documents with respect to specific aspects. Our motivating application is embedding biomedical abstracts describing clinical trials in a manner that disentangles the populations, interventions, and out…

Cited by 19 publications (17 citation statements); References 27 publications.
“…Their results suggest that, in the absence of a suitable supervision signal, high-quality factors can only be learned in the presence of a strong inductive bias. Going beyond unsupervised approaches, Jain et al. (2018) propose a supervised approach to DRL for text. As a supervision signal, they use triplets of the form (s, d, o)_a, which encode that, relative to aspect a, s and d are more similar than d and o.…”
Section: Disentangled Representation Learning (DRL)
confidence: 99%
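The triplet supervision described in the statement above can be sketched as a standard margin-based hinge loss: for an aspect a, the pair (s, d) should be closer in that aspect's embedding space than the pair (d, o). This is an illustrative sketch only; the function name, margin value, and use of Euclidean distance are assumptions, and the paper's actual objective is adversarial rather than this plain hinge formulation.

```python
import numpy as np

def triplet_margin_loss(emb_s, emb_d, emb_o, margin=1.0):
    """Hinge loss for one aspect-specific triplet (s, d, o)_a.

    Encourages the 'similar' pair (s, d) to be closer than the
    'dissimilar' pair (d, o) by at least `margin`. Names and the
    margin value are illustrative, not taken from the paper.
    """
    # distance between the pair judged more similar w.r.t. aspect a
    pos = np.linalg.norm(emb_s - emb_d)
    # distance between the pair judged less similar w.r.t. aspect a
    neg = np.linalg.norm(emb_d - emb_o)
    # zero loss once the similarity ordering holds with the margin
    return max(0.0, pos - neg + margin)
```

When the ordering is already satisfied by more than the margin the loss is zero; when d is as close to o as to s, the loss grows with the violation, pushing the aspect-specific encoder to separate the embeddings.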
“…Drawing inspiration from the vision community, learning disentangled representations has also been investigated in areas such as natural language processing and graph analysis. Jain et al. (2018) propose an autoencoder architecture to disentangle the populations, interventions, and outcomes in biomedical texts. Other work proposes a prism module for semantic disentanglement in named entity recognition.…”
Section: Learning Disentangled Representations
confidence: 99%
“…For instance, several authors have focused on separating style (or sentiment) from content (John et al., 2019). In general, most existing approaches for text use some kind of supervision signal, such as aspect-specific similarity judgements (Jain et al., 2018) or sentiment labels (He et al., 2017).…”
Section: Related Work
confidence: 99%