2023
DOI: 10.1016/j.knosys.2023.110381
CLSEP: Contrastive learning of sentence embedding with prompt

Cited by 16 publications (4 citation statements)
References 13 publications

“…DiffCSE [27] combines a contrastive loss over augmentation-insensitive examples with a replacement-detection loss over sensitive examples, resulting in improved sentence embeddings. CLSEP [28] introduces a novel data augmentation strategy for text, partial word vector augmentation (PWVA), which augments data in the word-embedding space while preserving more semantic information, leading to better sentence embeddings.…”
Section: Related Work, A. Learning for Sentence Embedding (mentioning)
confidence: 99%
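
The PWVA idea quoted above operates on word vectors rather than on raw tokens. Below is a minimal sketch, in Python/PyTorch, of what augmentation in the word-embedding space can look like; the function name, the Gaussian-noise operation, and the 30% ratio are illustrative assumptions, not CLSEP's exact procedure.

```python
# A minimal sketch of word-embedding-space augmentation in the spirit of
# CLSEP's PWVA [28]. Assumption: "partial" means only a randomly chosen
# subset of token vectors is perturbed; the paper's actual augmentation
# operations may differ.
import torch

def partial_word_vector_augment(embeddings: torch.Tensor,
                                ratio: float = 0.3,
                                noise_std: float = 0.01) -> torch.Tensor:
    """Perturb a random subset of word vectors, leaving the rest intact.

    embeddings: (batch, seq_len, hidden) token embeddings.
    ratio:      fraction of positions to augment per sentence (assumed).
    noise_std:  scale of the Gaussian noise (one possible augmentation op).
    """
    batch, seq_len, _ = embeddings.shape
    # Boolean mask selecting roughly `ratio` of the positions per sentence.
    mask = torch.rand(batch, seq_len, device=embeddings.device) < ratio
    noise = torch.randn_like(embeddings) * noise_std
    # Perturb only the masked positions; the unselected vectors pass through
    # untouched, which is the point of *partial* augmentation.
    return torch.where(mask.unsqueeze(-1), embeddings + noise, embeddings)
```

Because unselected vectors are left unchanged, more of the original sentence semantics survives than with token-level edits such as deletion or swapping, which matches the quoted claim about preserving semantic information.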
“…Contrastive learning achieves promising results in representation learning [14,17,22]. Generally, a Siamese network is used to construct the contrastive framework and conduct contrastive training [14].…”
Section: Representation Learning Based on Contrastive Learning (mentioning)
confidence: 99%
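
As a concrete instance of the Siamese contrastive framework the quoted passage describes, here is a minimal in-batch InfoNCE step. The weight-shared encoder, the temperature of 0.05, and the diagonal-positive construction are common conventions (SimCSE-style) rather than details taken from the cited works.

```python
# A minimal sketch of one Siamese contrastive step with in-batch negatives
# (InfoNCE). Both views are assumed to come from the same weight-shared
# (i.e., Siamese) encoder.
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """z1, z2: (batch, hidden) embeddings of two views of the same sentences.

    Positives sit on the diagonal of the similarity matrix; every other
    in-batch pair serves as a negative.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature           # (batch, batch) similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # diagonal positives
    return F.cross_entropy(logits, labels)
```

Training then amounts to encoding each sentence twice (e.g., under different augmentations), computing this loss, and backpropagating through the shared encoder.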
“…The assessment of PLM-based sentence representations relies on two crucial characteristics: generalization and robustness. Considerable research effort has been devoted to developing universal sentence embeddings with PLMs (Reimers and Gurevych, 2019; Zhang et al., 2020; Ni et al., 2022; Neelakantan et al., 2022; Wang et al., 2023; Bölücü et al., 2023), and these representations perform well across various downstream classification tasks (Sun et al., 2019), demonstrating proficiency in generalization. They nevertheless exhibit limited robustness in adversarial settings and are vulnerable to diverse adversarial attacks (Nie et al., 2020). Existing research (Garg and Ramakrishnan, 2020; Hauser et al., 2023) highlights the poor robustness of these representations: BERT-based representations, for example, can be deceived by replacing a few words in the input sentence.…”
Section: Introduction (mentioning)
confidence: 99%
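
A quick probe of the fragility described above is to encode a sentence and a one-word variant of it and compare the embeddings. The checkpoint and the mean pooling below are illustrative choices only; genuine attacks such as those of Garg and Ramakrishnan (2020) search for substitutions adversarially rather than by hand.

```python
# A minimal sketch probing BERT-based sentence embeddings under a single
# word substitution. The model, pooling, and example pair are assumptions
# for illustration, not the cited papers' setups.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(sentences):
    """Mean-pooled last-layer states as a simple sentence embedding."""
    inputs = tokenizer(sentences, return_tensors="pt", padding=True)
    with torch.no_grad():
        states = model(**inputs).last_hidden_state   # (batch, seq, hidden)
    mask = inputs["attention_mask"].unsqueeze(-1)    # ignore padding tokens
    return (states * mask).sum(1) / mask.sum(1)

# One substituted word flips the meaning, yet the embeddings typically stay
# highly similar, which illustrates the robustness gap the passage describes.
pair = ["the movie was surprisingly good", "the movie was surprisingly bad"]
z = embed(pair)
print(F.cosine_similarity(z[0:1], z[1:2]).item())    # usually close to 1.0
```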