2022
DOI: 10.1016/j.patcog.2021.108324
Two-step domain adaptation for underwater image enhancement

Cited by 88 publications (21 citation statements)
References 21 publications
“…Note that the goal of all of these works is to learn a domain-agnostic latent, in other words, to minimize the discrepancy between the latents encoded from the synthetic and real domains; however, such a latent space is hard to manipulate and interpret. In contrast, we aim to separate an image into content and style latents, to distinguish the style latents extracted from different domains, and further to find a meaningful latent space that can be manipulated. Jiang et al. [16] proposed a two-step domain adaptation framework without synthetic data: they use CycleGAN for style transfer to remove the color cast, which is similar to synthesizing underwater images (clean → underwater) in [10] but in the opposite direction (underwater → clean), and they remove the hazy effect in a second step.…”
Section: B. Domain Adaptation for Underwater Image
confidence: 99%
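
The two-step pipeline this statement describes (first translate real underwater images toward a clean style with a CycleGAN generator to remove the color cast, then dehaze the result) can be summarized in a short sketch. This is a minimal illustration under assumptions, not the authors' code; StyleTransferGenerator-style modules are passed in as hypothetical placeholders for the CycleGAN generator and the second-step dehazing network.

import torch
import torch.nn as nn

class TwoStepEnhancer(nn.Module):
    """Hypothetical sketch of the two-step adaptation pipeline:
    step 1 removes the color cast (underwater -> clean style),
    step 2 removes the remaining haze."""

    def __init__(self, style_transfer: nn.Module, dehaze: nn.Module):
        super().__init__()
        self.style_transfer = style_transfer  # stand-in for a pretrained CycleGAN generator
        self.dehaze = dehaze                  # stand-in for a dehazing CNN

    def forward(self, underwater: torch.Tensor) -> torch.Tensor:
        color_corrected = self.style_transfer(underwater)  # step 1: underwater -> clean style
        return self.dehaze(color_corrected)                # step 2: haze removal

Under this decomposition, each step handles one degradation factor, matching the underwater → clean direction noted in the statement.
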
“…The only restriction is that the images should at least contain underwater-type degradation, which includes color shift, low contrast, etc.; the degradation need not be realistic, since under the proposed framework the style information is separated from the content, what we are interested in are the target-domain (real-world underwater) style and clean high-quality images, and the aim is to build pseudo real underwater image pairs to learn real-world underwater image enhancement. Although the notion of performing domain adaptation via pseudo real-world image pairs is similar to [15,16], we highlight two differences from [15,16]. First, the input of the enhancement module is different: the prior works input the domain-translation result, i.e.…”
Section: A. Domain Adaptation and Image Enhancement
confidence: 99%
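
As a rough sketch of the pseudo-pair idea described above (separate content from style, then pair a clean image with a pseudo underwater version of itself), the function below recombines a clean image's content code with a style code drawn from the real underwater domain. All encoder/decoder names are assumed placeholders, not the paper's actual modules.

import torch
import torch.nn as nn

def build_pseudo_pair(clean: torch.Tensor, real_underwater: torch.Tensor,
                      content_enc: nn.Module, style_enc: nn.Module,
                      decoder: nn.Module):
    """Hypothetical sketch: create a (pseudo-underwater, clean) training pair
    by swapping a real-world underwater style code into a clean image."""
    content = content_enc(clean)                    # content latent of the clean image
    uw_style = style_enc(real_underwater)           # style latent from the real underwater domain
    pseudo_underwater = decoder(content, uw_style)  # pseudo real-world degraded input
    return pseudo_underwater, clean                 # (input, target) for the enhancement network
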
“…Note that the goal of all of these works is to learn a domain-agnostic latent, in other words, to minimize the discrepancy between the latents encoded from the synthetic and real domains; in contrast, we aim to separate an image into content and style latents and to distinguish the style latents from different sub-domains. Jiang et al. [26] proposed a two-step domain adaptation framework without synthetic data: they use CycleGAN for style transfer to remove the color cast, which is similar to synthesizing underwater images (clean → underwater) in [8] but in the opposite direction (underwater → clean), and they remove the hazy effect in a second step.…”
Section: B. Domain Adaptation for Underwater Image
confidence: 99%
“…However, these methods may not be effective for all underwater images due to their diverse and complicated degradation types. Image-restoration-based UIE methods [16-31] treat image quality amelioration as an inverse problem in which the physical underwater optical model is utilized and diverse image priors are explored as constraints to estimate the global background light magnitude and the transmission matrix involved in the model. However, one inevitable limitation of these prior-based restoration methods for UIE is that their priors are invalid in some specific underwater environments and/or under severe color casts.…”
Section: Introduction
confidence: 99%
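
The physical underwater optical model these restoration methods invert is commonly written as the simplified image-formation model below; the notation is the standard one, not necessarily that of the cited works:

I_c(x) = J_c(x)\,t_c(x) + B_c\,\bigl(1 - t_c(x)\bigr), \qquad c \in \{r, g, b\},

where I_c is the observed intensity, J_c the scene radiance to be restored, B_c the global background light, and t_c(x) the per-channel transmission. Once B_c and t_c are estimated from image priors, restoration inverts the model, typically with a lower bound t_0 on the transmission to avoid amplifying noise:

\hat{J}_c(x) = \frac{I_c(x) - B_c}{\max\bigl(t_c(x),\, t_0\bigr)} + B_c.
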