2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00084
Dual Contrastive Learning for Unsupervised Image-to-Image Translation

Cited by 147 publications (157 citation statements)
References 25 publications
“…Domain adaptation is one type of transfer learning and has been well explored in the centralized setting, where both source-domain and target-domain data are available for performing knowledge transfer. In this regard, a reconstruction-based method with an encoder-decoder architecture aims to learn a discriminative mapping of target samples to the source feature space, thus improving generalization performance (Tzeng et al., 2017; Ghifary et al., 2016; Han et al., 2021). However, the generative approach is generally resource-consuming, relying on substantial computational capability.…”
Section: Related Work (mentioning; confidence: 99%)
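The reconstruction-based scheme this excerpt refers to can be summarized in a minimal sketch: a shared encoder feeds both a classifier trained on labeled source data and a decoder trained to reconstruct unlabeled target data, which pulls target samples into the source feature space. The PyTorch code below is an illustrative sketch in the spirit of encoder-decoder approaches such as Ghifary et al. (2016); the layer sizes, the class name `ReconstructionDA`, and the unweighted loss sum are assumptions, not the cited authors' implementation.

```python
import torch
import torch.nn as nn

class ReconstructionDA(nn.Module):
    """Minimal sketch (assumed architecture) of reconstruction-based domain
    adaptation: one encoder serves both a source-domain classifier and a
    target-domain decoder, so target samples are mapped into the source
    feature space."""
    def __init__(self, in_dim=784, feat_dim=128, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)  # supervised on source
        self.decoder = nn.Linear(feat_dim, in_dim)        # reconstructs target

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

model = ReconstructionDA()
x_src, y_src = torch.randn(8, 784), torch.randint(0, 10, (8,))  # labeled source
x_tgt = torch.randn(8, 784)                                     # unlabeled target

logits_src, _ = model(x_src)
_, recon_tgt = model(x_tgt)
# Joint objective: classification loss on source + reconstruction loss on target.
loss = nn.functional.cross_entropy(logits_src, y_src) + \
       nn.functional.mse_loss(recon_tgt, x_tgt)
```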
“…It aims to maximize the mutual information by pulling the anchor close to positive samples while pushing it away from negative samples in the representation space. Recent studies have applied contrastive learning to low-level vision tasks and obtained improved performance, for example in image dehazing [48], image deraining [7], image super-resolution [44], and image-to-image translation [35, 14]. The most critical design choice in contrastive learning is how to select the negatives.…”
Section: Contrastive Learning (mentioning; confidence: 99%)
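The pull/push objective described in this excerpt is commonly realized with an InfoNCE-style loss. The following is a minimal PyTorch sketch, not the implementation of any cited work; the function name `info_nce_loss`, the temperature value, and the single-anchor formulation are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss: pull the anchor toward its positive
    and push it away from the negatives in the representation space.

    anchor:    (D,)   feature vector of the query sample
    positive:  (D,)   feature of the corresponding positive sample
    negatives: (N, D) features of N negative samples
    """
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)

    # Similarity of the anchor with the positive (scalar) and each negative (N,)
    pos_sim = torch.dot(anchor, positive) / temperature
    neg_sim = negatives @ anchor / temperature

    # Cross-entropy over (1 + N) logits, where the positive is class 0
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```

How the negatives are chosen (the "most critical design" noted above) is left to the caller here: in-batch patches, a memory bank, or hard-negative mining are all common choices.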
“…We focus on unsupervised image-to-image translation models to fully exploit HICRD. We compare CWR to several state-of-the-art baselines from different perspectives, including image-to-image translation approaches (CUT [43], CycleGAN [44], and DCLGAN [49]), conventional underwater image enhancement methods (Histogram prior [50], Retinex [51], and Fusion [52]), conventional underwater image restoration methods (UDCP [9], DCP [8], IBLA [13], and Haze-line [15]), and a learning-based restoration method (UWCNN [20]). We use the pre-trained UWCNN model with water type-3, which is close to our dataset.…”
Section: A. Baselines (mentioning; confidence: 99%)
“…For image-to-image translation approaches, CUT [43] and DCLGAN [49] aim to maximize the mutual information between corresponding patches of the input and the output. DCLGAN [49] employs a dual learning setting and assigns a different encoder to each domain to gain better performance. We first employ CUT and DCLGAN to enable the underwater image restoration task.…”
Section: A. Baselines (mentioning; confidence: 99%)
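To make the patch-wise mutual-information idea and the dual-encoder setting concrete, here is a hedged PyTorch sketch. It is not the DCLGAN or CUT code: the toy `PatchEncoder`, the `patchwise_nce` helper, and the random tensors standing in for a real image and its translation are assumptions for illustration. It shows that (i) each translated patch is contrasted against the patch at the same spatial location in the input (positive) with all other locations as negatives, and (ii) in the dual setting each domain has its own encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """Toy convolutional encoder mapping an image to a grid of patch features.
    Stands in for a per-domain feature extractor; the real DCLGAN encoders are
    the generators' own feature layers followed by small MLP heads."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=2, padding=1),
        )

    def forward(self, x):
        feat = self.net(x)                                     # (B, dim, H/4, W/4)
        b, c, h, w = feat.shape
        return feat.permute(0, 2, 3, 1).reshape(b, h * w, c)   # one row per patch

def patchwise_nce(feats_src, feats_tgt, temperature=0.07):
    """Patch-wise InfoNCE: each output patch should match the patch at the same
    location in the input; all other locations act as negatives."""
    q = F.normalize(feats_tgt, dim=-1)                          # (B, P, C) queries
    k = F.normalize(feats_src, dim=-1)                          # (B, P, C) keys
    logits = torch.bmm(q, k.transpose(1, 2)) / temperature      # (B, P, P)
    labels = torch.arange(q.size(1)).expand(q.size(0), -1)      # diagonal = positives
    return F.cross_entropy(logits.reshape(-1, q.size(1)), labels.reshape(-1))

# Dual setting (illustrative): one encoder per domain, so the X->Y translation is
# scored with the domain-X encoder on the input and the domain-Y encoder on the output.
enc_X, enc_Y = PatchEncoder(), PatchEncoder()
x = torch.randn(2, 3, 64, 64)       # real image from domain X
fake_y = torch.randn(2, 3, 64, 64)  # stands in for G_XY(x)
loss_XY = patchwise_nce(enc_X(x), enc_Y(fake_y))
```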