BigSUR: Large-scale Structured Urban Reconstruction (2017)
DOI: 10.1145/3130800.3130823

Abstract: Fig. 1. Structured Urban Reconstruction. Given street-level imagery, GIS footprints, and a coarse 3D mesh (left), we formulate a global optimization to automatically fuse these noisy, incomplete, and conflicting data sources to create building footprints (middle: colored horizontal polygons) with profiles (vertical ribbons shown for several footprints) and attached building façades (vertical rectangles). The output encodes a structured urban model (right) including the walls, roof, and associated building elem…

Cited by 62 publications (22 citation statements). References 69 publications.
“…Given the high-resolution mesh, we need to enforce a strong prior to ultimately synthesize a simplified structure. At a high-level, our approach is similar to urban scene modeling [35], but the essential prior being used is different. For urban buildings, they utilize structural clues from GIS footprints, satellite imagery, and semantic segmentation and infer combinations of sweep-edges.…”
Section: Plane Completion
confidence: 99%
“…For example, Lotte et al (2018) address the issue of transferring labels of rendered images back to their 3D urban models combining CNN and Structure from Motion (SfM). Similarly, Kelly et al (2017) use images and 3D models of urban scenes in combination with deep learning techniques to derive structured models of city blocks, addressing the automatic fusion of street-level imagery, polygonal meshes and GIS building footprints. This paper aims to provide an alternative to the use of more classic machine learning methods previously proposed for CH classification such as traditional RF (not deep RF as in Zhou et.…”
Section: Deep Learning
confidence: 99%
“…Since the seminal paper of Parish and Müller [PM01], numerous works have concentrated on city‐scale modeling [VABW09; AFS ∗ 11], road modeling [CEW ∗ 08; GPMG10], parcel modeling [VKW ∗ 12], building modeling [MWH ∗ 06; WWSR03], and facade modeling [BSW13; MZWG07; SHFH11; ZXJ ∗ 13]. Since it is difficult to define and cumbersome to write detailed procedural models of large areas, many works have focused on automatic creation of procedural models, or inverse procedural modeling, often starting from one or more photographs; for example, city‐scale modeling [VGA ∗ 12; KFWM17; ZSBA20], tree creation [SPK ∗ 14; HBDP17], building modeling [NGA ∗ 16; NBA18; ZWF18], and facade generation [ZMA20]. However, few works have focused on automatically generating procedural roofs from images and almost none from a single image.…”
Section: Introduction
confidence: 99%