2018
DOI: 10.1007/978-3-030-01240-3_43

PSDF Fusion: Probabilistic Signed Distance Function for On-the-fly 3D Data Fusion and Scene Reconstruction
Abstract: We propose a novel 3D spatial representation for data fusion and scene reconstruction. Probabilistic Signed Distance Function (Probabilistic SDF, PSDF) is proposed to depict uncertainties in the 3D space. It is modeled by a joint distribution describing SDF value and its inlier probability, reflecting input data quality and surface geometry. A hybrid data structure involving voxel, surfel, and mesh is designed to fully exploit the advantages of various prevalent 3D representations. Connected by PSDF, these com…
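The abstract models each voxel with a joint distribution over the SDF value and its inlier probability. As a rough illustration of that idea (not the paper's actual data structure; the field names and the confidence heuristic below are assumptions), a per-voxel record might look like:

```python
from dataclasses import dataclass

@dataclass
class PSDFVoxel:
    """Hypothetical per-voxel record illustrating the PSDF idea:
    a Gaussian over the signed distance plus an inlier probability."""
    sdf_mean: float = 0.0     # expected signed distance to the surface
    sdf_var: float = 1.0      # variance of the SDF estimate
    inlier_prob: float = 0.5  # probability that observations here are inliers

    def confidence_weight(self) -> float:
        # Illustrative heuristic: trust a voxel more when its SDF
        # variance is low and its inlier probability is high.
        return self.inlier_prob / (self.sdf_var + 1e-6)
```

In the paper's hybrid representation such voxels are further connected to surfels and mesh elements; that machinery is omitted here.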

Cited by 42 publications (33 citation statements)
References 31 publications
“…Existing state-of-the-art methods can be classified into two categories: online reconstructions [34]–[38], i.e., dense simultaneous localization and mapping (SLAM), and offline reconstructions [39]–[43], which achieve higher accuracy. To obtain high-quality 3D reconstruction, BundleFusion [38] uses additional color features for registration and global bundle adjustment to obtain precise scene geometry; Choi et al. [39] and Zeng et al. [40] reconstruct locally smooth scene fragments and globally align them using 3D features.…”
Section: Related Work, A. RGB-D Scene 3D Reconstruction
Mentioning confidence: 99%
“…A similar model with long-range ray-based visibility constraints was used in [47,46], although these methods are not real-time capable. Recently, PSDF Fusion [15] demonstrated a combination of probabilistic modeling and a TSDF scene representation. However, they also assume a Gaussian error distribution of the input depth values.…”
Section: Related Work
Mentioning confidence: 99%
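The passage above notes that PSDF Fusion assumes a Gaussian error distribution of the input depth values. Under that assumption, fusing a new SDF observation into a per-voxel Gaussian estimate reduces to a product of two Gaussians, i.e., a one-dimensional Kalman-style update. A minimal sketch (the function name and interface are illustrative, not taken from the paper):

```python
def fuse_gaussian_sdf(mu, var, z, z_var):
    """Fuse a new SDF observation z with variance z_var into the current
    Gaussian estimate (mu, var) via a product of Gaussians."""
    k = var / (var + z_var)        # Kalman-style gain
    mu_new = mu + k * (z - mu)     # posterior mean
    var_new = (1.0 - k) * var      # posterior variance (always shrinks)
    return mu_new, var_new
```

Fusing two equally uncertain observations, e.g. `fuse_gaussian_sdf(0.0, 1.0, 1.0, 1.0)`, yields the midpoint estimate `(0.5, 0.5)` with halved variance.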
“…With the advent of Kinect-style active depth sensors, the KinectFusion [1] algorithm permits dense volumetric reconstruction of the scene in real time, enabling mesh model output for physics-based augmented reality (AR) [24] and 3D printing [25]. Improved frameworks have since been proposed addressing memory efficiency [26]–[28], large-space representation [8], [11], [27], [29], camera trajectory accuracy with loop-closure detection and optimization [12], [27], [30], and scene representations such as surfels [31] or hybrid data structures [32].…”
Section: Related Work
Mentioning confidence: 99%
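For context, the dense volumetric fusion that KinectFusion and its successors perform is a truncated, weighted running average of signed distances per voxel. A minimal sketch of that classic per-voxel update (parameter names and the truncation bound are illustrative):

```python
def tsdf_update(d_old, w_old, d_obs, w_obs=1.0, trunc=0.1):
    """Classic weighted-average TSDF update for one voxel:
    clamp the observed signed distance to the truncation band,
    then blend it into the running average by observation weight."""
    d_obs = max(-trunc, min(trunc, d_obs))   # truncate the observation
    w_new = w_old + w_obs                    # accumulate weight
    d_new = (d_old * w_old + d_obs * w_obs) / w_new
    return d_new, w_new
```

PSDF replaces the scalar weight with an explicit uncertainty model, but the voxel-grid traversal and averaging structure is the same.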