2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00264

Learned Semantic Multi-Sensor Depth Map Fusion

Abstract: Volumetric depth map fusion based on truncated signed distance functions has become a standard method and is used in many 3D reconstruction pipelines. In this paper, we generalize this classic method in multiple ways: 1) Semantics: Semantic information enriches the scene representation and is incorporated into the fusion process. 2) Multi-Sensor: Depth information can originate from different sensors or algorithms with very different noise and outlier statistics, which are considered during data fusion. 3…
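
The abstract's starting point, classic TSDF fusion, updates each voxel with a running weighted average of truncated signed distances (Curless & Levoy). The sketch below illustrates that baseline update only; all names (fuse_depth_map, tsdf, weights, sdf_obs) are illustrative and not taken from the paper's code.

```python
# A minimal sketch of classic TSDF fusion (Curless & Levoy style), the
# baseline the paper generalizes. All names here are illustrative, not
# from the paper's code.
import numpy as np

def fuse_depth_map(tsdf, weights, sdf_obs, trunc_dist=0.05, w_obs=1.0):
    """Fuse one depth map into the volume via a running weighted average.

    tsdf, weights -- (X, Y, Z) volumes: fused signed distance and the
                     accumulated fusion weight per voxel.
    sdf_obs       -- (X, Y, Z) signed distance of each voxel to the surface
                     observed in the current depth map (NaN if unobserved).
    """
    sdf_obs = np.clip(sdf_obs, -trunc_dist, trunc_dist)  # truncation step
    mask = ~np.isnan(sdf_obs)                            # update observed voxels only
    new_w = weights[mask] + w_obs
    tsdf[mask] = (weights[mask] * tsdf[mask] + w_obs * sdf_obs[mask]) / new_w
    weights[mask] = new_w
    return tsdf, weights
```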

Cited by 7 publications (3 citation statements) | References 51 publications

Citation statements (ordered by relevance):

“…Although it provides a real-time and loop-closure solution for indoor semantic interactive systems, this work focuses on small-scale structured scenarios. Rozumnyi et al. [26] proposed a deep fusion method based on machine learning, which integrates semantic 3D reconstruction, scene construction, and multi-sensor data into a learning-based framework. This method automatically extracts sensor parameters and scene attribute parameters from training data and represents them as confidence values to achieve semantic mapping, requiring only a small amount of training data to achieve good generalization.…”
Section: Semantic Mapping in Robotics
confidence: 99%
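
As a rough illustration of the mechanism described in the quote above, per-sensor confidence values can replace the constant fusion weight of the classic update, so that noisy or outlier-prone sensors contribute less per voxel. This is a hypothetical sketch, not the paper's actual network: in the paper the confidences come from a learned model, which is stubbed out here as precomputed arrays.

```python
# Hypothetical sketch of the confidence-weighted, multi-sensor variant the
# quote describes: each sensor supplies a per-voxel confidence in place of
# the constant weight w_obs used in the classic update above.
import numpy as np

def fuse_multi_sensor(tsdf, weights, observations, trunc_dist=0.05):
    """observations -- list of (sdf_obs, confidence) pairs, one per sensor.

    sdf_obs    -- (X, Y, Z) signed distances from that sensor's depth map
                  (NaN where the voxel is unobserved).
    confidence -- (X, Y, Z) per-voxel reliability in [0, 1], encoding the
                  sensor's noise and outlier statistics (assumed to be
                  precomputed here; learned from data in the paper).
    """
    for sdf_obs, conf in observations:
        sdf_obs = np.clip(sdf_obs, -trunc_dist, trunc_dist)
        mask = ~np.isnan(sdf_obs) & (conf > 0)           # skip unreliable voxels
        new_w = weights[mask] + conf[mask]
        tsdf[mask] = (weights[mask] * tsdf[mask]
                      + conf[mask] * sdf_obs[mask]) / new_w
        weights[mask] = new_w
    return tsdf, weights
```
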
“…Yet, these methods study specific sensors and are not inherently equipped with 3D reasoning. Few works consider 3D reconstruction with multiple sensors [57,31,7,76,77,24], but these do not consider the online mapping setting. Conceptually more closely related to our work is SenFuNet [58], an online mapping method for multi-sensor depth fusion.…”
Section: Multi-Sensor Depth Fusion
confidence: 99%
“…Hence, they do not account for the fact that, when observed from different viewpoints, depth measurements may present different levels of noise and outliers. To tackle this limitation, [13] and [14] proposed end-to-end learning-based approaches for volumetric depth map fusion. These methods can handle sensor noise and outliers, but require a large amount of training data and are relatively prone to over-fitting to a particular sensor or dataset.…”
Section: Related Work
confidence: 99%