2023
DOI: 10.3390/rs15051264
Multi-Sensor Data Fusion for 3D Reconstruction of Complex Structures: A Case Study on a Real High Formwork Project

Abstract: As the most comprehensive document type for recording and displaying real-world information about construction projects, realistic 3D models can simultaneously record and display textures and geometric shapes in the same 3D scene. At present, however, the documentation of much construction infrastructure faces significant challenges. Based on TLS, GNSS/IMU, mature photogrammetry, a UAV platform, computer vision technologies, and AI algorithms, this study proposes a workflow for 3D…

Cited by 9 publications (6 citation statements)
References 93 publications (116 reference statements)
“…During the algorithmic research, after replacing the FPN feature extraction network model with the AFPN combined with the ASFF feature extraction network model, we considered updating the FMT feature matching module of the original network. After checking the related literature, we found that the local feature transform (LoFTR) [43] network model can satisfy our needs. This network model uses a Transformer with self- and cross-attention layers to handle the dense local features extracted from the backbone network.…”
Section: Constraints or Challenges
confidence: 98%
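The self- and cross-attention the quoted passage refers to can be illustrated with a minimal numpy sketch of scaled dot-product attention. This is a generic illustration, not LoFTR's actual implementation: the function name, feature shapes, and random features are all assumptions for the example.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (n_q, n_k) similarity matrix
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # each row sums to 1
    return weights @ v                             # (n_q, d) attended features

rng = np.random.default_rng(0)
feats_a = rng.standard_normal((5, 8))   # dense local features from image A (illustrative)
feats_b = rng.standard_normal((7, 8))   # dense local features from image B (illustrative)

# Self-attention: queries, keys, and values all come from the same feature set.
self_attended = scaled_dot_product_attention(feats_a, feats_a, feats_a)
# Cross-attention: queries from one image attend to features of the other.
cross_attended = scaled_dot_product_attention(feats_a, feats_b, feats_b)
```

Interleaving these two operations lets each feature aggregate context from its own image and from the other view, which is what makes the matched descriptors discriminative.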
“…The reconstruction technique based on RGB images generally involves employing a multi-source image fusion approach to acquire the target's geometric shape and texture information [39]. Its procedure typically entails the following steps: (1) capture a set of two-dimensional images containing the target object from various angles; (2) identify and extract features such as key points, corners, edges, etc.…”
Section: 3D Reconstruction of Livestock Based on RGB Images
confidence: 99%
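Step (2) of the quoted pipeline, detecting key points and corners in each captured view, can be sketched with a minimal Harris corner response in plain numpy. This is an illustrative sketch, not the cited paper's method; the synthetic image, box-filter window, and parameter k = 0.05 are all assumptions for the example.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response: large positive values mark corners,
    negative values mark straight edges, ~0 marks flat regions."""
    iy, ix = np.gradient(img.astype(float))

    def box(a):
        # Simple 3x3 box filter to smooth the structure-tensor entries.
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    ixx, iyy, ixy = box(ix * ix), box(iy * iy), box(ix * iy)
    det = ixx * iyy - ixy * ixy          # determinant of the structure tensor
    trace = ixx + iyy                    # trace of the structure tensor
    return det - k * trace * trace

# Synthetic "view": a bright square on a dark background has four sharp corners.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
r = harris_response(img)
```

The response is positive only near the square's corners and essentially zero in the flat interior, which is why such detectors feed the matching stage of multi-view reconstruction.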
“…where f_1 is a standard 1D convolution operation with a kernel size of one and a stride of one. The convolution operation does not alter the size of the feature map; however, the number of channels changes from 3 to 256.…”
Section: Point Cloud Module
confidence: 99%
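The quoted operation is easy to verify in a few lines: a 1D convolution with kernel size one and stride one is just a per-position linear map over channels, so the spatial size is untouched while the channel count changes from 3 to 256. The point count and random weights below are illustrative assumptions, not the cited paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 1024                            # number of points in the cloud (illustrative)
x = rng.standard_normal((3, n_points))     # input: 3 channels (e.g. x, y, z) per point

# Kernel size 1, stride 1: each output position depends only on the same
# input position, i.e. y[:, i] = W @ x[:, i] + b for every i.
W = rng.standard_normal((256, 3)) * 0.1    # weights: (out_channels, in_channels)
b = np.zeros(256)                          # bias, one value per output channel
y = W @ x + b[:, None]                     # output: (256, n_points)
```

The feature-map length (n_points) is preserved exactly; only the channel dimension grows, matching the statement in the quote.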
“…Its primary objective is to restore the 3D structure of a target object or scene. The continuous evolution of sensor technologies has made it possible to obtain high-quality and accurate point cloud and image data [1]. However, due to the rapid development of digital information and the increasing complexity of neural networks, relying solely on a single sensor is insufficient for capturing the rich features of target objects.…”
Section: Introduction
confidence: 99%