2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.268
Scalable Surface Reconstruction from Point Clouds with Extreme Scale and Density Diversity

Abstract: In this paper we present a scalable approach for robustly computing a 3D surface mesh from multi-scale multi-view stereo point clouds that can handle extreme jumps in point density (in our experiments, three orders of magnitude). The backbone of our approach is a combination of octree data partitioning, local Delaunay tetrahedralization, and graph cut optimization. Graph cut optimization is used twice: once to extract surface hypotheses from local Delaunay tetrahedralizations, and once to merge overlapping surface…
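The abstract outlines a pipeline of octree partitioning, per-cell Delaunay tetrahedralization, and two rounds of graph-cut optimization. The sketch below is a minimal, hypothetical illustration of only the first two stages, assuming NumPy/SciPy and a simple capacity-based splitting rule; the paper's actual graph-cut energies and the merging of overlapping surface hypotheses are not reproduced here.

```python
# Hypothetical sketch: octree partitioning of a point cloud followed by one
# local Delaunay tetrahedralization per leaf cell. Cell capacity, depth limit,
# and splitting rule are illustrative assumptions, not the paper's settings.
import numpy as np
from scipy.spatial import Delaunay


def octree_partition(points, max_points=5000, depth=0, max_depth=8):
    """Recursively split an (N, 3) point array into octree leaf cells."""
    if len(points) <= max_points or depth == max_depth:
        return [points]
    center = points.mean(axis=0)
    # Assign each point to one of the 8 octants around the cell center.
    octant = (points > center).astype(int) @ np.array([1, 2, 4])
    leaves = []
    for o in range(8):
        sub = points[octant == o]
        if len(sub) > 0:
            leaves.extend(octree_partition(sub, max_points, depth + 1, max_depth))
    return leaves


def local_tetrahedralizations(leaves):
    """Build one Delaunay tetrahedralization per octree leaf cell."""
    tets = []
    for cell in leaves:
        if len(cell) >= 5:  # need enough non-degenerate points in 3D
            tets.append(Delaunay(cell))
    return tets


if __name__ == "__main__":
    pts = np.random.rand(20000, 3)  # stand-in for an MVS point cloud
    leaves = octree_partition(pts)
    tets = local_tetrahedralizations(leaves)
    print(f"{len(leaves)} leaf cells, {len(tets)} local tetrahedralizations")
    # Per the abstract, a graph cut over the tetrahedra (inside/outside
    # labeling) would follow, once per local cell and once more to merge
    # overlapping surface hypotheses.
```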

Cited by 26 publications (21 citation statements)
References 29 publications
“…Despite the essential information carried by the texture, many 3D reconstruction works pay little attention to this problem and limit themselves to assigning one intensity per primitive of the model (vertex, facet, voxel, 3D point, surfel). Very often, to represent a textured model, these works use a polygonal mesh and simply assign one intensity per facet or vertex [Steinbruecker et al., 2014; Mostegel et al., 2017]. The major drawback of this approach is that it ties the resolution of the texture to that of the geometry of the reconstructed model.…”
Section: Bilan (Summary)
“…Unfortunately, existing algorithms often decouple the detection of local primitives from the construction of global structures. Continuing with our examples, object contouring methods typically detect line segments along image discontinuities before assembling them into polygons [1], [2], and multi-view stereo reconstruction algorithms extract 3D points by feature matching before interpolating them with a surface mesh [3], [4]. While this two-step approach reduces the computational burden, the quality of the resulting structures depends heavily on the local decisions made during primitive detection.…”
Section: Introduction
confidence: 99%
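As a toy illustration of the two-step pattern this passage criticizes, the sketch below first produces sparse, noisy "primitives" (here, 3D sample points) and only afterwards fits a global surface to them by interpolation, so errors made in the first step cannot be revisited. The data and the SciPy interpolation scheme are placeholders, not the methods of the cited works [3], [4].

```python
# Toy example of the decoupled two-step pipeline: local primitives first,
# global structure second, with no feedback from the second step to the first.
import numpy as np
from scipy.interpolate import griddata

# Step 1: "primitive detection" stand-in -- sparse, noisy 3D points, as a
# feature-matching stage might produce them.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(300, 2))
z = np.sin(np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1]) + rng.normal(0, 0.05, 300)

# Step 2: global structure by interpolation -- a 2.5D surface fitted to the
# points; any error made in step 1 propagates directly into the surface.
gx, gy = np.mgrid[-1:1:100j, -1:1:100j]
surface = griddata(xy, z, (gx, gy), method="cubic")
print(surface.shape)  # (100, 100) height field
```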
“…The mobility and maneuverability of UAVs, which can move freely in three dimensions and capture close-up images of an object from arbitrary viewing angles, make it possible to generate high-resolution, photo-realistic, and highly accurate 3D models by processing a series of overlapping images with current state-of-the-art structure-from-motion (SfM) and multi-view stereo (MVS) pipelines such as Pix4D [1], Bundler [2], or Colmap [3]. These models are of great interest in various fields, such as digitized building models for 3D city modeling [4], object inspection [5], or cultural heritage documentation [6]. However, the quality of the resulting 3D models relies strongly on flight plans that satisfy the requirements of an image-based 3D modeling process, which include the acquisition of multiple overlapping images, sufficient baselines between camera viewpoints, and the prevention of optical occlusions by surrounding obstacles.…”
Section: Introduction
confidence: 99%