“…To validate the capability of MF-Dnet in depth acquisition tasks, comparative experiments were conducted. In these experiments, several end-to-end methods such as autoencoder network (AEN) [23], UNet [24], hNet [26], and generative adversarial network (GAN) [27] were compared with the proposed method. To provide a comprehensive evaluation of each algorithm, qualitative analysis and quantitative assessment were performed.…”
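The quantitative assessment mentioned above typically reduces to pixelwise error statistics between the predicted and ground-truth depth maps. A minimal sketch of two common metrics, MAE and RMSE (the metric choice is my assumption; the excerpt does not name the metrics used):

```python
import numpy as np

def depth_metrics(pred, gt, mask=None):
    """Pixelwise MAE and RMSE between a predicted and a ground-truth depth map.

    An optional boolean mask restricts the evaluation to valid pixels
    (e.g., the measured object's silhouette).
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    if mask is None:
        mask = np.ones(gt.shape, dtype=bool)
    err = pred[mask] - gt[mask]
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    return mae, rmse

# Toy check: a constant 0.5 offset gives MAE = RMSE = 0.5.
gt = np.zeros((4, 4))
pred = gt + 0.5
mae, rmse = depth_metrics(pred, gt)
```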
Section: Compared With Existing Network (mentioning)
confidence: 99%
“…Machineni et al. [25] proposed a paradigm shift by introducing an end-to-end deep learning-based framework for FPP that does not need any frequency-domain filtering or phase unwrapping. Nguyen et al. [26] made improvements to the UNet architecture and introduced hNet for fringe-to-depth learning. Wang et al. [27] used computer graphics (CG) for data generation and fed the generated data into the pix2pix network to learn depth information from fringe patterns.…”
End-to-end networks have been successfully applied to fringe projection profilometry in recent years for their high flexibility and fast speed. Most of them can predict a depth map from a single fringe pattern, but the predicted depth map inherits the fringe fluctuation and loses the local details of the measured object. To address this issue, an end-to-end network based on dual spatial-frequency fringes (a dual-frequency depth acquisition network) is proposed. To suppress the periodic error of the predicted depth map, a dual-branch structure is designed to learn the global contour and local details of the measured object from dual-frequency patterns. To fully exploit the contextual information of the fringe patterns, five novel modules are proposed to perform feature extraction, down-sampling/up-sampling, and information feeding. Ablation experiments verify the effectiveness of the proposed modules. Comparative experiments demonstrate that the proposed lightweight network achieves higher accuracy than existing end-to-end learning algorithms. A noise-immunity test and physical validation demonstrate the generalization of the network.
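The dual-branch idea of separating a global contour from local details can be caricatured without any learning, using fixed filters in place of learned branches. The sketch below is purely illustrative of the decomposition-and-fusion principle; it is not the architecture described in the abstract, and the box filter stands in for whatever each branch actually learns:

```python
import numpy as np

def box_blur(img, k=5):
    """Naive k-by-k box filter; a stand-in for a low-frequency branch."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def dual_branch_fuse(low_freq_input, high_freq_input):
    """Fuse a coarse 'contour' estimate with a high-pass 'detail' residual.

    Mirrors the dual-branch idea only in spirit: one branch keeps the
    global shape, the other contributes fine structure, and the outputs
    are combined into a single map.
    """
    global_contour = box_blur(low_freq_input)                     # coarse branch
    local_detail = high_freq_input - box_blur(high_freq_input)    # detail branch
    return global_contour + local_detail
```

In the learned version, each branch would be a CNN fed with one of the two fringe frequencies, and the fusion would itself be learned rather than a plain sum.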
“…In recent years, many scholars have been inspired by deep learning techniques and started to apply them to the field of 3D imaging. Some scholars have tried to use deformed fringe images to map depth directly [15][16][17][18][19][20] . Using large numbers of fringe patterns to train different networks, researchers found that U-Net 19 performed better at predicting depth than other convolutional neural networks and the generative adversarial network (GAN) 20 .…”
Section: Introduction (mentioning)
confidence: 99%
“…In practical measurement, acquiring multi-frequency or single-frequency fringe images is time-consuming, and the reconstruction accuracy of single-shot fringe-to-depth methods is low. Some researchers 16,17,25,27 have provided datasets for testing, but large numbers of datasets need to be recreated for different models and different situations. For this reason, Wang 18 detailed the creation of a simulated FPP system for batch dataset preparation using 3D modelling software combined with computer graphics.…”
Because of its complicated realization process, the traditional three-dimensional (3D) structured-light reconstruction method gradually fails to meet the needs of actual production and complex scenes. The combination of fringe projection profilometry and deep learning effectively improves this situation, and classical neural network models have gradually shown their unique advantages in the field of 3D reconstruction. Although existing reconstruction methods have been improved in different aspects, they still suffer from complex dataset production and low reconstruction accuracy, so they are difficult to apply to actual 3D measurement. On this basis, a virtual 3D measurement simulation system based on fringe projection profilometry is built to generate batch training data, simplifying the actual data collection process. The models reconstructed by traditional fringe projection profilometry serve as the ground truth to verify the effectiveness of the virtual dataset. In this paper, phase information is taken as the target: a multi-scale feature-fusion convolutional neural network transforms a single fringe image into multiple single-frequency phase-shift images, and these phase-shift images are then used as input to obtain the fringe order. In this way, 3D reconstruction of complex objects can be realized, which simplifies the complicated calculation process of traditional methods. Extensive experiments show the proposed method to be more accurate and efficient than existing methods.
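Once the network has produced single-frequency phase-shift images, the wrapped phase is recovered from them by the standard N-step phase-shifting formula, with each image modeled as I_n = A + B·cos(φ + 2πn/N). A minimal sketch of that classical step (this is the textbook algorithm, not the paper's network):

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from an (N, H, W) stack of N-step
    phase-shifted fringe images I_n = A + B*cos(phi + 2*pi*n/N).

    Uses sum(I_n sin d_n) = -(N*B/2) sin(phi) and
    sum(I_n cos d_n) =  (N*B/2) cos(phi), so the background A and
    modulation B cancel out.
    """
    images = np.asarray(images, dtype=float)
    N = images.shape[0]
    delta = 2 * np.pi * np.arange(N).reshape(-1, 1, 1) / N
    num = np.sum(images * np.sin(delta), axis=0)
    den = np.sum(images * np.cos(delta), axis=0)
    return np.arctan2(-num, den)  # wrapped to (-pi, pi]
```

The fringe order obtained in the pipeline above then turns this wrapped phase into an absolute phase, from which depth follows via the system calibration.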
“…As a matter of fact, a few strategies have been proposed to transform a captured structured-light image into its corresponding 3D shape using deep learning. For instance, an autoencoder-based network named UNet can serve as an end-to-end network to acquire the depth map from a single structured-light image [28][29][30][31]. Works presented in [32][33][34][35][36] reveal that a phase map can be retrieved by one or multiple neural networks from structured-light images, and the phase map is then used to calculate the depth map.…”
Accurate three-dimensional (3D) shape reconstruction of objects from a single image is a challenging task, yet it is highly demanded by numerous applications. This paper presents a novel 3D shape reconstruction technique integrating a high-accuracy structured-light method with a deep neural network learning scheme. The proposed approach employs a convolutional neural network (CNN) to transform a color structured-light fringe image into multiple triple-frequency phase-shifted grayscale fringe images, from which the 3D shape can be accurately reconstructed. The robustness of the proposed technique is verified, and it can be a promising 3D imaging tool in future scientific and industrial applications.
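Multi-frequency phase-shifted images are ultimately combined through temporal phase unwrapping: a low-frequency phase, which is free of wraps over the measurement volume, fixes the fringe order of the high-frequency phase. A minimal two-frequency sketch of that classical step (my illustration of the standard technique; the excerpt does not specify the paper's unwrapping algorithm):

```python
import numpy as np

def unwrap_with_reference(phi_high, phi_low, freq_ratio):
    """Temporal phase unwrapping with a wrap-free low-frequency reference.

    phi_high   : wrapped phase of the high-frequency pattern, in (-pi, pi]
    phi_low    : unwrapped phase of the low-frequency pattern
    freq_ratio : f_high / f_low

    The fringe order k is the integer making freq_ratio*phi_low and the
    unwrapped high-frequency phase agree.
    """
    k = np.round((freq_ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k
```

A triple-frequency scheme simply applies this step twice, unwrapping the middle frequency with the lowest one and then the highest frequency with the result.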