Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.311

DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings

Abstract: We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning (Dangovski et al., 2021), which generalizes contrastive learning and learns re…
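
To make the edit operation described in the abstract concrete, the sketch below builds an "edited sentence" by stochastically masking tokens and then filling the masks with samples from a masked language model. This is a minimal sketch, assuming a HuggingFace MLM (distilbert-base-uncased) and a 30% mask rate; both are illustrative stand-ins, not the paper's actual generator or configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative model and mask rate; DiffCSE's generator/settings may differ.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased").eval()

def make_edited_sentence(sentence: str, mask_rate: float = 0.3) -> str:
    """Stochastically mask tokens, then fill the masks by sampling from an MLM."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"].clone()

    # Pick roughly mask_rate of the non-special tokens and replace them with [MASK].
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(
            input_ids[0].tolist(), already_has_special_tokens=True
        ),
        dtype=torch.bool,
    )
    mask = (torch.rand(input_ids.shape[1]) < mask_rate) & ~special
    input_ids[0, mask] = tokenizer.mask_token_id

    if mask.any():
        # Sample a replacement token from the MLM's distribution at each mask.
        with torch.no_grad():
            logits = mlm(input_ids=input_ids,
                         attention_mask=enc["attention_mask"]).logits
        probs = torch.softmax(logits[0, mask], dim=-1)
        input_ids[0, mask] = torch.multinomial(probs, num_samples=1).squeeze(-1)

    return tokenizer.decode(input_ids[0], skip_special_tokens=True)

print(make_edited_sentence("The quick brown fox jumps over the lazy dog."))
```

In DiffCSE, edited sentences of this kind feed a discriminator that predicts, conditioned on the sentence embedding, which tokens were replaced; the sketch only illustrates the augmentation step itself.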

Cited by 50 publications (47 citation statements)
References: 0 publications
“…However, we failed to reproduce the results of [16]. Because Chuang et al. [4] do not consider supervised settings in their work, we built the supervised version of DiffCSE ourselves using the same objectives. Although D2CSE outperforms DiffCSE, both methods that integrate equivariant contrastive learning tend to achieve lower performance than the original contrastive learning.…”
Section: Supervised Methods Results (mentioning)
confidence: 96%
“…Gao et al. [11] propose SimCSE, which uses dropout to generate randomly perturbed sentence vectors for the same textual input. Numerous approaches [4, 16, 29, 32, 35, 36] have successfully followed SimCSE.…”
Section: Introduction (mentioning)
confidence: 99%
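
For context on the dropout mechanism quoted above, the following is a minimal sketch of the SimCSE-style positive pair: encoding the same input twice with dropout active yields two slightly different vectors. The model name and [CLS] pooling here are assumptions for illustration; SimCSE itself trains such pairs with an in-batch contrastive (InfoNCE) loss.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.train()  # keep dropout active so the two passes differ

enc = tokenizer("A sentence to embed.", return_tensors="pt")
z1 = encoder(**enc).last_hidden_state[:, 0]  # [CLS] vector, pass 1
z2 = encoder(**enc).last_hidden_state[:, 0]  # [CLS] vector, pass 2

# The two embeddings are close but not identical: a "free" positive pair.
sim = torch.nn.functional.cosine_similarity(z1, z2)
print(sim.item())
```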
“…In contrast, our method computes the MI on attention. Other related works include (Chuang et al., 2022; Liu et al., 2022).…”
Section: Introduction (mentioning)
confidence: 99%