hNet: Single-shot 3D shape reconstruction using structured light and h-shaped global guidance network
2021
DOI: 10.1016/j.rio.2021.100104

Cited by 20 publications (16 citation statements)
References 45 publications
“…To validate the capability of MF-Dnet in depth acquisition tasks, comparative experiments were conducted. In these experiments, several end-to-end methods such as autoencoder network (AEN) [23], UNet [24], hNet [26], and generative adversarial network (GAN) [27] were compared with the proposed method. To provide a comprehensive evaluation of each algorithm, qualitative analysis and quantitative assessment were performed.…”
Section: Compared With Existing Network
confidence: 99%
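The statement above mentions both qualitative analysis and quantitative assessment of the compared depth-acquisition networks. As a minimal sketch of what such a quantitative assessment typically computes (the specific metrics and the `valid`-mask convention here are assumptions, not taken from the cited paper):

```python
import numpy as np

def depth_errors(pred, gt, valid=None):
    """Pixel-wise depth-map error metrics: MAE and RMSE.

    `valid` is an optional boolean mask selecting pixels that carry a
    ground-truth depth (e.g. excluding background). This is a generic
    sketch, not the evaluation protocol of any particular paper.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    if valid is None:
        valid = np.ones_like(gt, dtype=bool)
    diff = pred[valid] - gt[valid]
    mae = float(np.mean(np.abs(diff)))          # mean absolute error
    rmse = float(np.sqrt(np.mean(diff ** 2)))   # root-mean-square error
    return mae, rmse
```

Comparing predicted and ground-truth depth maps with such metrics is what allows the networks (AEN, UNet, hNet, GAN) to be ranked on the same footing.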
See 1 more Smart Citation
“…To validate the capability of MF-Dnet in depth acquisition tasks, comparative experiments were conducted. In these experiments, several end-to-end methods such as autoencoder network (AEN) [23], UNet [24], hNet [26], and generative adversarial network (GAN) [27] were compared with the proposed method. To provide a comprehensive evaluation of each algorithm, qualitative analysis and quantitative assessment were performed.…”
Section: Compared With Existing Networkmentioning
confidence: 99%
“…Machineni et al [25] proposed a paradigm shift by introducing an end-to-end deep-learning-based framework for FPP that does not need any frequency-domain filtering or phase unwrapping. Nguyen et al [26] improved the UNet architecture and introduced hNet for fringe-to-depth learning. Wang et al [27] used computer graphics (CG) for data generation and fed the generated data into the pix2pix network to learn depth information from fringe patterns.…”
Section: Introduction
confidence: 99%
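The fringe-to-depth networks cited above learn to invert the fringe-projection image-formation model. A minimal sketch of that forward model (the intensity offset `A`, modulation `B`, carrier frequency `freq`, and unit phase-height gain are illustrative assumptions; a real FPP system uses calibrated triangulation geometry):

```python
import numpy as np

def deformed_fringe(depth, freq=8, A=0.5, B=0.4):
    """Render one deformed fringe image from a depth map.

    Simplified fringe-projection model: the projected sinusoid is
    phase-modulated in proportion to surface height,
        I(x, y) = A + B * cos(2*pi*freq*x + k * z(x, y)),
    with a hypothetical phase-height gain k = 1.
    """
    h, w = depth.shape
    x = np.linspace(0.0, 1.0, w)          # normalised column coordinate
    carrier = 2.0 * np.pi * freq * x      # undeformed carrier phase
    return A + B * np.cos(carrier[None, :] + depth)

# a toy smooth surface and its single-shot fringe image
yy, xx = np.mgrid[0:64, 0:64] / 64.0
z = 2.0 * np.exp(-((xx - 0.5) ** 2 + (yy - 0.5) ** 2) / 0.05)
img = deformed_fringe(z)
```

An end-to-end network such as hNet is trained on many (`img`, `z`) pairs to regress the depth map directly from a single such image, skipping explicit phase retrieval and unwrapping.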
“…In recent years, many scholars have been inspired by deep learning techniques and have begun applying them to the field of 3D imaging. Some have tried to map depth directly from deformed fringe images [15][16][17][18][19][20]. Training different networks on large numbers of fringe patterns, researchers found that U-Net [19] predicted depth better than other convolutional neural networks and the generative adversarial network (GAN) [20].…”
Section: Introduction
confidence: 99%
“…In practical measurement, the acquisition of multi-frequency or single-frequency fringe images is time-consuming, and the reconstruction accuracy of single-shot fringe-prediction depth methods is low. Some researchers [16,17,25,27] have provided datasets for testing, but large numbers of datasets need to be recreated for different models and different situations. For this reason, Wang [18] detailed the creation of a simulated FPP system for batch dataset preparation using 3D modelling software combined with computer graphics.…”
Section: Introduction
confidence: 99%
“…As a matter of fact, a few strategies have been proposed to transform a captured structured-light image into its corresponding 3D shape using deep learning. For instance, an autoencoder-based network named UNet can serve as an end-to-end network to acquire the depth map from a single structured-light image [28][29][30][31]. Works presented in [32][33][34][35][36] reveal that a phase map can be retrieved by one or multiple neural networks from structured-light images, and the phase map is then used to calculate the depth map.…”
Section: Introduction
confidence: 99%
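The statement above distinguishes the two-stage route: first retrieve a phase map from structured-light images, then convert it to depth. The classical N-step phase-shifting formula that the cited networks replace or approximate can be sketched as follows (the intensity model matches the standard `I_n = A + B cos(phi + 2*pi*n/N)` convention; nothing here is specific to the cited works):

```python
import numpy as np

def nstep_phase(images):
    """Recover the wrapped phase from N phase-shifted fringe images.

    Standard N-step phase-shifting (N >= 3), with shifts
    delta_n = 2*pi*n/N applied to I_n = A + B*cos(phi + delta_n):
        phi = atan2(-sum_n I_n*sin(delta_n), sum_n I_n*cos(delta_n)).
    Returns the phase wrapped to (-pi, pi]; unwrapping (temporal or
    spatial) is a separate step before depth conversion.
    """
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    delta = 2.0 * np.pi * np.arange(n) / n
    num = -(images * np.sin(delta)[:, None, None]).sum(axis=0)
    den = (images * np.cos(delta)[:, None, None]).sum(axis=0)
    return np.arctan2(num, den)
```

Single-shot learning-based methods are attractive precisely because they avoid capturing the N images this formula requires.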