2022
DOI: 10.48550/arxiv.2204.00511
Preprint

Learning Disentangled Representations of Negation and Uncertainty

Abstract: Negation and uncertainty modeling are longstanding tasks in natural language processing. Linguistic theory postulates that expressions of negation and uncertainty are semantically independent from each other and the content they modify. However, previous works on representation learning do not explicitly model this independence. We therefore attempt to disentangle the representations of negation, uncertainty, and content using a Variational Autoencoder. We find that simply supervising the latent representat…
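The approach outlined in the abstract, partitioning a VAE's latent space into content, negation, and uncertainty subspaces so that supervision can target each factor separately, can be illustrated with a minimal sketch. All dimensions, weights, and function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subspace sizes (assumptions, not taken from the paper):
# a content subspace plus small negation and uncertainty subspaces.
D_IN, D_CONTENT, D_NEG, D_UNC = 16, 8, 2, 2
D_LATENT = D_CONTENT + D_NEG + D_UNC

# Toy linear encoder producing the mean and log-variance of the
# Gaussian posterior over the latent code.
W_mu = rng.normal(scale=0.1, size=(D_IN, D_LATENT))
W_logvar = rng.normal(scale=0.1, size=(D_IN, D_LATENT))

def encode(x):
    """Map input features to posterior parameters (mu, log-variance)."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the standard VAE reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def split_latent(z):
    """Partition z into (content, negation, uncertainty) subspaces."""
    return (z[..., :D_CONTENT],
            z[..., D_CONTENT:D_CONTENT + D_NEG],
            z[..., D_CONTENT + D_NEG:])

x = rng.normal(size=(4, D_IN))           # batch of 4 toy inputs
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
z_content, z_neg, z_unc = split_latent(z)
# Auxiliary supervision (e.g. a negation classifier) would read only z_neg,
# and an uncertainty classifier only z_unc, encouraging disentanglement.
```

The key design point is that the supervision heads see only their own slice of the latent code, so gradients pushing negation information into `z_neg` do not flow through the content subspace.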

Cited by 0 publications
References 24 publications