2011 International Conference on Computer Vision (ICCV 2011)
DOI: 10.1109/iccv.2011.6126329
2D-3D fusion for layer decomposition of urban facades

Abstract: We present a method for fusing two acquisition modes, 2D photographs and 3D LiDAR scans, for depth-layer decomposition of urban facades. The two modes have complementary characteristics: point cloud scans are coherent and inherently 3D, but are often sparse, noisy, and incomplete; photographs, on the other hand, are of high resolution, easy to acquire, and dense, but view-dependent and inherently 2D, lacking critical depth information. In this paper we use photographs to enhance the acquired LiDAR data. Our ke…
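The depth-layer decomposition the abstract describes can be illustrated with a minimal sketch: assuming facade points have already been reduced to scalar depths (distances along the facade normal), grouping them with a simple gap threshold yields discrete layers. The function `decompose_into_layers` and its `gap` parameter are hypothetical illustrations, not the paper's actual algorithm, which additionally fuses image evidence with the LiDAR points.

```python
# Hypothetical sketch: grouping facade points into discrete depth layers.
# A new layer starts wherever consecutive sorted depths differ by more
# than `gap` metres. This is an illustration only, not the paper's method.

def decompose_into_layers(depths, gap=0.15):
    """Return point-index groups, ordered front-most layer first."""
    order = sorted(range(len(depths)), key=lambda i: depths[i])
    layers = []
    current = [order[0]]
    for prev, idx in zip(order, order[1:]):
        if depths[idx] - depths[prev] > gap:
            layers.append(current)
            current = []
        current.append(idx)
    layers.append(current)
    return layers

# Example: window bars (~0.0 m), wall (~0.3 m), balcony (~1.0 m)
pts = [0.02, 0.31, 0.98, 0.29, 0.01, 1.02]
print(decompose_into_layers(pts))
```

A fixed threshold is the simplest choice; a real pipeline would need something more robust to the sparse, noisy depths the abstract mentions.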

Cited by 65 publications (39 citation statements). References 29 publications.
“…Large holes in the background layer, caused by occlusions from foreground layer objects, are then filled by planar or horizontal interpolation. However, such an approach may result in false features in case of insufficient repetitions or lack of symmetry [17]. In our work, we aim to resolve this problem by using a multi-sessional approach in which multiple scans of the same environment obtained on different days and at different times of the day are matched and used to complete the occluded regions.…”
Section: Related Work
confidence: 99%
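The "horizontal interpolation" hole filling this excerpt refers to can be sketched on a single scanline: missing background-layer depths are filled by linearly interpolating between the nearest known neighbours. The function `fill_row` and the use of `None` to mark holes are hypothetical illustrations under that assumption, not the cited papers' implementations.

```python
# Hypothetical sketch of horizontal hole filling in a background depth
# layer: None marks an occluded pixel; gaps are filled by linear
# interpolation between the nearest valid neighbours in the scanline.

def fill_row(row):
    """Fill None gaps by linear interpolation; edge gaps copy the
    nearest valid value instead."""
    out = list(row)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:
                j += 1                      # j is the first valid index after the gap
            left = out[i - 1] if i > 0 else out[j]
            right = out[j] if j < n else left
            for k in range(i, j):
                t = (k - i + 1) / (j - i + 1)
                out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out

print(fill_row([1.0, None, None, 4.0]))
```

As the excerpt notes, purely geometric interpolation like this can hallucinate false features when the facade lacks repetition or symmetry, which is what motivates the multi-sessional approach described above.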
“…Leveraging databases of millions of images, [4] completes scenes by finding semantically similar pictures. Fusing 2D and 3D data, [6] computes a layer decomposition from a registration between the 2D and 3D data sets, which allows for outlier removal and geometry propagation. [14] introduces context-sensitive surface completion.…”
Section: Related Work
confidence: 99%
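The registration between 2D and 3D data that this excerpt mentions rests on projecting LiDAR points into a photograph. A minimal sketch, assuming a pinhole camera with known intrinsics and points already in the camera frame: points that land outside the image (or behind the camera) carry no image evidence in that view and can be flagged for outlier handling. `project`, `inside`, and the intrinsic values are hypothetical names for illustration.

```python
# Hypothetical sketch of the 2D-3D registration step: pinhole projection
# of camera-frame LiDAR points into pixel coordinates, so image evidence
# can be attached to each 3D point. Intrinsics (fx, fy, cx, cy) assumed.

def project(point, fx, fy, cx, cy):
    """Project a camera-frame 3D point to (u, v) pixel coordinates."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def inside(uv, width, height):
    """True if the pixel falls within the image bounds."""
    u, v = uv
    return 0 <= u < width and 0 <= v < height

# Points behind the camera or projecting outside the image get no image
# support in this view; a full pipeline would fuse many views.
pts = [(0.0, 0.0, 2.0), (1.0, 0.5, 4.0), (5.0, 0.0, 1.0)]
kept = [p for p in pts
        if p[2] > 0 and inside(project(p, 500, 500, 320, 240), 640, 480)]
print(len(kept))
```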
“…Further, if the SfM outputs have gross errors due to ambiguity across repeated elements (see Figure 1), subsequent analysis of the 3D point sets can only extract wrong constraints. Other possibilities involve integrating information from other acquisition modes (e.g., LiDAR scans as used by Zheng et al [2010] and Li et al [2011]) or allowing the user to indicate symmetry structures on 3D data (e.g., [Nan et al 2010]) (see the recent survey of Mitra et al [? ] for a more detailed discussion on the use of symmetry priors in architecture modeling).…”
Section: Related Work
confidence: 99%