Enhancing Underwater Imagery using Generative Adversarial Networks
2018 · Preprint
DOI: 10.48550/arxiv.1801.04011

Cited by 7 publications (24 citation statements)
References 0 publications
“…There exist a number of hand gesture-based HRI frameworks [17], [18], [19] for terrestrial robots. In addition, recent visual hand gesture recognition techniques [26], [27] based on CNNs have been shown to be highly accurate and robust to noise and visual distortions [8]. A number of such visual recognition and tracking techniques have been successfully used for underwater tracking [16] and have proven to be more robust than other purely feature-based methods [3].…”
Section: B. Underwater Human-Robot Communication (mentioning)
confidence: 99%
“…The trained model is invariant to the scale and appearance of divers (e.g., the color of the suit/flippers, swimming directions, etc.) and robust to noise and image distortions [8].…”
Section: Introduction (mentioning)
confidence: 99%
“…Recently, the Underwater Generative Adversarial Network (UGAN) [12] was proposed to improve underwater image quality. For the discriminator, UGAN chose WGAN-GP (Wasserstein GAN with gradient penalty) [14], which enforces a soft Lipschitz constraint on the output with respect to its input by penalizing the gradient norms instead of clipping the gradients to a fixed range.…”
Section: UGAN (mentioning)
confidence: 99%
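For readers unfamiliar with the penalty term referenced in this statement, the snippet below is a minimal sketch of a generic WGAN-GP gradient penalty, assuming PyTorch; it is not code from the UGAN paper, and the function name and the `lambda_gp` default are illustrative.

```python
# Minimal sketch of a generic WGAN-GP gradient penalty (assumed PyTorch;
# names and hyperparameters such as `lambda_gp` are illustrative, not taken
# from the UGAN paper). The critic's gradient norm at points interpolated
# between real and generated images is softly pushed toward 1, rather than
# hard-clipping weights or gradients.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Return the WGAN-GP penalty term for one batch of images (N, C, H, W)."""
    real, fake = real.detach(), fake.detach()
    batch_size = real.size(0)

    # One random interpolation coefficient per sample.
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interp = eps * real + (1.0 - eps) * fake
    interp.requires_grad_(True)

    scores = critic(interp)

    # Gradient of the critic output with respect to the interpolated inputs.
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]

    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)

    # Penalize squared deviation of the gradient norm from 1 (soft Lipschitz constraint).
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```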
“…The generator is motivated by CycleGAN [66] and is comparable to the encoder-decoder network of U-Net [48]. The encoder of UGAN [12] is composed of convolutional layers with 4 × 4 filters and a stride of two, each followed by batch normalization [22] and leaky ReLU (slope 0.2). Similarly, the decoder consists of deconvolutional layers followed by ReLU [42], except for the last layer, where TanH is used to restrict the output distribution to between -1 and 1.…”
Section: UGAN (mentioning)
confidence: 99%
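The encoder/decoder layout quoted above can be sketched as follows. This is an illustrative, assumed PyTorch snippet rather than the authors' network: the depth and channel widths are chosen arbitrarily and the U-Net style skip connections are omitted; only the 4 × 4 stride-2 convolutions with batch norm and leaky ReLU (slope 0.2), the ReLU decoder, and the final TanH follow the quoted description.

```python
# Illustrative sketch of the encoder/decoder pattern described above (assumed
# PyTorch; depth, channel widths, and the missing skip connections are
# simplifications, not the authors' architecture).
import torch.nn as nn

def enc_block(in_ch, out_ch):
    # 4x4 convolution, stride 2 (halves spatial size), batch norm, leaky ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True))

def dec_block(in_ch, out_ch, last=False):
    # 4x4 transposed convolution, stride 2 (doubles spatial size); ReLU
    # everywhere except the output layer, which uses TanH to map to [-1, 1].
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.Tanh() if last else nn.ReLU(inplace=True))

# A shallow 3-level encoder-decoder; input and output are 3-channel images.
generator = nn.Sequential(
    enc_block(3, 64), enc_block(64, 128), enc_block(128, 256),
    dec_block(256, 128), dec_block(128, 64), dec_block(64, 3, last=True))
```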
“…These problems are exacerbated underwater since both the robot and diver are suspended in a six-degrees-of-freedom (6DOF) environment. Consequently, classical model-based detection algorithms fail to achieve good generalization performance [3,4]. On the other hand, model-free algorithms incur significant target drift [5] under such noisy conditions.…”
mentioning
confidence: 99%