2021
DOI: 10.48550/arxiv.2112.04323
Preprint
Contrastive Learning with Large Memory Bank and Negative Embedding Subtraction for Accurate Copy Detection

Abstract: Copy detection, the task of determining whether an image is a modified copy of any image in a database, remains an unsolved problem. We address copy detection by training convolutional neural networks (CNNs) with contrastive learning. Training with a large memory bank and hard data augmentation enables the CNNs to learn more discriminative representations. Our proposed negative embedding subtraction further boosts copy detection accuracy. Using these methods, we achieved 1st place in the Facebook AI …
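The "negative embedding subtraction" named in the abstract can be sketched roughly as a descriptor post-processing step: push each descriptor away from its most similar descriptors in a set of known-unrelated ("negative") images, then re-normalize. The sketch below is a minimal NumPy illustration of that general idea only; the function name and the parameters `k`, `beta`, and `iterations` are assumptions for illustration, not the paper's actual procedure or values.

```python
import numpy as np

def negative_embedding_subtraction(emb, neg_emb, k=10, beta=0.35, iterations=1):
    """Hypothetical sketch: for each descriptor, subtract a scaled mean of its
    k most similar negative descriptors, then re-normalize to unit length.
    emb: (n, d) query descriptors; neg_emb: (m, d) negative-set descriptors."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    neg = neg_emb / np.linalg.norm(neg_emb, axis=1, keepdims=True)
    for _ in range(iterations):
        sims = emb @ neg.T                       # cosine similarity to negatives
        topk = np.argsort(-sims, axis=1)[:, :k]  # k most similar negatives per row
        for i in range(len(emb)):
            emb[i] -= beta * neg[topk[i]].mean(axis=0)
        emb /= np.linalg.norm(emb, axis=1, keepdims=True)
    return emb
```

Because the output is re-normalized, cosine similarities between processed descriptors remain directly comparable for retrieval.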

Cited by 5 publications (12 citation statements)
References 15 publications
“…Out of the 100 most similar pairs for each metric, the NPCK recovers a far higher fraction of near-duplicates (Figure 16 and Figure 17). Only a specially trained duplicate detection model (Yokoo, 2021) performs comparably.…”
Section: Neural Posterior Correlation Kernel
confidence: 99%
“…We report two-sided p-values. (Douze et al., 2021) descriptor track winner (Yokoo, 2021). The ResNet penultimate features and CLIP embeddings both match many pairs which are not visually similar.…”
Section: B3 Theorem
confidence: 99%
“…Another landmark work, MoCo [11], improves on InfoNCE through a momentum contrast mechanism that improves convergence. [43] further advocates training convolutional neural networks with contrastive learning and hard data augmentation to obtain more discriminative representations. Both [5] and [43] employ a similar data augmentation strategy and take an augmented image as a positive (similar) sample.…”
Section: Contrastive Learning
confidence: 99%
“…[43] further advocates training convolutional neural networks with contrastive learning and hard data augmentation to obtain more discriminative representations. Both [5] and [43] employ a similar data augmentation strategy and take an augmented image as a positive (similar) sample. Such augmentations include cropping, grayscale conversion, blocking part of the picture, and horizontal flipping, which are quite similar to the strategies adopted in video infringement detection.…”
Section: Contrastive Learning
confidence: 99%
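The augmentation-based positive pairs described in the citation statements above (cropping, grayscale, blocking part of the picture, horizontal flipping) can be sketched as follows. This is a minimal NumPy illustration under assumed crop sizes, erase sizes, and probabilities; it is not the actual augmentation pipeline of [5] or [43].

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Illustrative 'hard' augmentations of the kind listed above, applied to
    an H x W x 3 float image array. All sizes/probabilities are assumptions."""
    h, w, _ = img.shape
    out = img.copy()
    # random crop to 80% of each side (a stand-in for crop-and-resize)
    ch, cw = int(0.8 * h), int(0.8 * w)
    y, x = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
    out = out[y:y + ch, x:x + cw]
    if rng.random() < 0.5:  # horizontal flip
        out = out[:, ::-1]
    if rng.random() < 0.2:  # grayscale: average the channels
        out = np.repeat(out.mean(axis=2, keepdims=True), 3, axis=2)
    # blocking: erase a random rectangle
    eh, ew = ch // 4, cw // 4
    ey, ex = rng.integers(0, ch - eh + 1), rng.integers(0, cw - ew + 1)
    out[ey:ey + eh, ex:ex + ew] = 0.0
    return out

def positive_pair(img):
    # two independent augmentations of one image form a positive (similar) pair
    return augment(img), augment(img)
```

In a contrastive setup, the two views returned by `positive_pair` are pulled together in embedding space while views of other images (the negatives, e.g. from a memory bank) are pushed apart.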