2020
DOI: 10.1007/978-3-030-58568-6_8
Hard Negative Examples are Hard, but Useful

Cited by 66 publications (32 citation statements)
References 19 publications
“…As with previous studies [1,14,24], CL benefits from hard negative samples, i.e. samples close to the anchor node such that they cannot be distinguished easily.…”
Section: Structure-aware Hard Negative Mining
confidence: 75%
“…However, the previous scheme assumes that all negative samples contribute equally to the CL objective. Previous research in metric learning [14] and visual representation learning [1,24] has established that hard negative samples are of particular concern for effective CL. To be specific, the more similar a negative sample is to its anchor, the more helpful it is for learning effective representations.…”
Section: Introduction
confidence: 99%
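The notion of "hardness" in these statements is simply similarity to the anchor in embedding space. Below is a minimal PyTorch sketch of that selection step; the function name `select_hard_negatives` and the `top_k` parameter are illustrative, not taken from any of the cited papers.

```python
import torch
import torch.nn.functional as F

def select_hard_negatives(anchor, negatives, top_k=8):
    """Pick the top_k negatives most similar to the anchor.

    anchor:    (d,) embedding of the anchor sample.
    negatives: (n, d) embeddings of candidate negatives.
    """
    # Cosine similarity after L2-normalisation; higher = harder negative.
    a = F.normalize(anchor, dim=0)
    negs = F.normalize(negatives, dim=1)
    sims = negs @ a                 # (n,) similarity to the anchor
    hard = sims.topk(top_k)         # hardest negatives rank first
    return negatives[hard.indices], hard.values
```

Feeding only these top-k samples (or up-weighting them) into the contrastive objective is the basic mining strategy the statement describes.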
“…To mine effective negative samples, these methods depend heavily on large batch sizes [7] or memory banks [18]. Utilizing hard negative samples has long been recognized as an effective way to boost model performance [17,29,54,45]. In contrastive learning studies, [10,43] modify the contrastive loss so that it assigns greater weight to hard negative samples.…”
Section: Related Work
confidence: 99%
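One common way to realise such reweighting, sketched here under assumptions (this is not the exact loss of [10] or [43], and `beta`/`tau` are illustrative hyperparameters), is to scale each negative's contribution to the InfoNCE denominator by a weight that grows with its similarity to the anchor:

```python
import torch
import torch.nn.functional as F

def hardness_weighted_infonce(anchor, positive, negatives, tau=0.1, beta=1.0):
    """InfoNCE-style loss where harder negatives receive larger weights.

    anchor, positive: (d,) embeddings; negatives: (n, d) embeddings.
    beta controls how sharply weight concentrates on hard negatives;
    beta = 0 recovers the unweighted loss.
    """
    a = F.normalize(anchor, dim=0)
    p = F.normalize(positive, dim=0)
    negs = F.normalize(negatives, dim=1)

    pos_logit = (a @ p) / tau        # scalar
    neg_logits = (negs @ a) / tau    # (n,)

    # Importance weights: more similar -> larger weight, normalised so
    # the weights average to 1 over the negative set.
    w = torch.softmax(beta * neg_logits, dim=0) * negatives.shape[0]

    denom = pos_logit.exp() + (w * neg_logits.exp()).sum()
    return -(pos_logit - denom.log())
```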
“…Such losses distinguish between similar/positive samples and dissimilar/negative samples in a batch during training. However, even these simplest losses are often considered difficult to optimize, and practitioners often rely on various training methods that can be useful in practice but interact in an unclear way with the optimization process: for example negative mining [42,52,55,56], memory banks [24,32,45,51,53,54], and positive-only formulations [12,21,46].…”
Section: Introduction
confidence: 99%
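For context on one of those mechanisms: a memory bank here is typically a fixed-size FIFO queue of embeddings from earlier batches, reused as extra negatives so the current batch can stay small. A hypothetical sketch (the `NegativeQueue` class is illustrative, loosely in the spirit of queue-based methods such as MoCo, not any specific cited implementation):

```python
import torch
import torch.nn.functional as F

class NegativeQueue:
    """Fixed-size FIFO queue of past embeddings used as negatives."""

    def __init__(self, dim, size=4096):
        # Arbitrary (random) initial contents, L2-normalised.
        self.queue = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, embeddings):
        """Overwrite the oldest slots with the newest batch embeddings."""
        emb = embeddings.detach()
        n = emb.shape[0]
        idx = (self.ptr + torch.arange(n)) % self.queue.shape[0]
        self.queue[idx] = emb
        self.ptr = (self.ptr + n) % self.queue.shape[0]

    def negatives(self):
        return self.queue               # (size, dim) pool of negatives
```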