2020
DOI: 10.1109/tvcg.2019.2919619

HeteroFusion: Dense Scene Reconstruction Integrating Multi-Sensors

Abstract: We present a novel approach to integrate data from multiple sensor types for dense 3D reconstruction of indoor scenes in real-time. Existing algorithms are mainly based on a single RGBD camera and thus require continuous scanning of areas with sufficient geometric features. Otherwise, tracking may fail due to unreliable frame registration. Inspired by the fact that the fusion of multiple sensors can combine their strengths towards a more robust and accurate self-localization, we incorporate multiple types of se…

Cited by 14 publications (5 citation statements)
References 46 publications (96 reference statements)
“…Yet, these methods study specific sensors and are not inherently equipped with 3D reasoning. Few works consider 3D reconstruction with multiple sensors [57,31,7,76,77,24], but these do not consider the online mapping setting. Conceptually, more closely related to our work is SenFuNet [58], which is an online mapping method for multi-sensor depth fusion.…”
Section: Multi-sensor Depth Fusion
confidence: 99%
“…Therefore, some processing is required to upscale the spatial resolution for the measured depth. This upscaling can be performed by tempo-spatial supersampling of the depth data, with Inertial Measurement Unit (IMU) and position-tracking [29]. However, this is only feasible for static scenes and moving sensors.…”
Section: Depth Upscaling
confidence: 99%
“…Panoramic images have a wider perspective than typical perspective images, contain more comprehensive buildings and a stronger sense of reality, and have richer scene details [3]. Owing to the abundant resources and wide field of view (FOV), panoramic images have considerable advantages over perspective images in large-scale, three-dimensional (3D) scene reconstruction [4,5]. Three-dimensional scene reconstruction [6][7][8][9] has been comprehensively studied and applied to digital cities, virtual reality [10], and SLAM [11,12] because of its significant advantages in terms of model accuracy, realism, and ease of modeling.…”
Section: Introduction
confidence: 99%