2023
DOI: 10.1007/s13042-023-01823-8
Text semantic matching with an enhanced sample building method based on contrastive learning

Cited by 4 publications (1 citation statement)
References 9 publications
“…Recently, researchers have achieved great advances in sentence representation based on contrastive learning with pre-trained language models [6][7][8][9][10]. On the one hand, large-scale pre-trained language models (PLMs), typified by BERT [11], are trained on unlabeled data, improving the state-of-the-art results in most downstream tasks.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
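
The excerpt describes contrastive learning over PLM sentence embeddings only in general terms. As an illustration, here is a minimal sketch of the common unsupervised recipe it alludes to (SimCSE-style in-batch InfoNCE, using dropout noise to form positive pairs). The encoder name, mean pooling, and temperature below are assumptions made for the sketch, not details taken from the cited paper.

```python
# Minimal sketch: in-batch contrastive learning for sentence
# representations with a pre-trained encoder. Encoder choice,
# pooling, and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed; any BERT-like PLM works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

def embed(sentences):
    """Encode sentences and mean-pool token states into one vector each."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (B, H)

def contrastive_loss(sentences, temperature=0.05):
    """InfoNCE over two dropout-noised views of the same batch:
    each sentence's second view is its positive; every other sentence
    in the batch serves as an in-batch negative."""
    encoder.train()                # keep dropout active: the two passes differ
    z1 = embed(sentences)          # view 1
    z2 = embed(sentences)          # view 2 (different dropout mask)
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1)
    sim = sim / temperature        # (B, B) similarity logits
    labels = torch.arange(len(sentences))  # diagonal = positive pairs
    return F.cross_entropy(sim, labels)

loss = contrastive_loss(["a man is playing guitar",
                         "two dogs run through a field"])
loss.backward()                    # gradients flow into the PLM
```

In this scheme the PLM itself supplies the augmentation: two forward passes over the same sentence differ only in their dropout masks, which is enough to define positives without labeled data, matching the excerpt's point that such models are trained on unlabeled text.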