2015
DOI: 10.1007/978-3-319-24947-6_19
Line3D: Efficient 3D Scene Abstraction for the Built Environment

Cited by 33 publications (32 citation statements)
References 26 publications
“…For instance in DTU-006, we are able to reconstruct edges of any inclination, recovering all the structural elements in the scenes; in the DTU-098 dataset, the high reflectivity of the metallic cans causes the SfM pipeline to fail in reconstructing a considerable portion of the surfaces, while the same areas are fully recovered by the proposed system. Since the enhancement of 3D Delaunay-based mesh reconstructions is one of the most relevant reasons why we estimate 3D edges, we also compared the 3D meshes reconstructed through the algorithm described in [19] from the OpenMVG points, from Line3D++ [8], and from the points sampled from the 3D edges. As suggested in [21], we compare the depth maps generated by the reconstructed and the ground-truth meshes from the central camera of the sequence of each dataset.…”
Section: Results (mentioning)
confidence: 99%
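The depth-map comparison mentioned in the statement above can be illustrated with a short sketch. This is a minimal, hypothetical example, not the cited authors' code: it assumes both meshes have already been rendered into depth maps from the same central camera at the same resolution, stores them as NumPy arrays, and uses zero as the "no surface hit" marker; the function name and that convention are assumptions made here for illustration.

```python
import numpy as np

def depth_error(depth_recon, depth_gt, invalid=0.0):
    """Compare a depth map rendered from a reconstructed mesh against one
    rendered from the ground-truth mesh (same camera, same resolution).

    Pixels where either map equals `invalid` (no surface hit) are ignored.
    Returns the mean and median absolute depth difference over valid pixels.
    """
    depth_recon = np.asarray(depth_recon, dtype=np.float64)
    depth_gt = np.asarray(depth_gt, dtype=np.float64)
    assert depth_recon.shape == depth_gt.shape, "depth maps must match in size"

    valid = (depth_recon != invalid) & (depth_gt != invalid)
    if not np.any(valid):
        return np.nan, np.nan

    diff = np.abs(depth_recon[valid] - depth_gt[valid])
    return float(diff.mean()), float(np.median(diff))


if __name__ == "__main__":
    # Toy example with synthetic 4x4 depth maps; 0.0 marks missing depth.
    gt = np.full((4, 4), 2.0)
    recon = gt + np.random.normal(scale=0.01, size=gt.shape)
    recon[0, 0] = 0.0  # simulate a pixel the reconstruction failed to cover
    print(depth_error(recon, gt))
```

In practice the two depth maps would come from rendering the reconstructed and ground-truth meshes with the same camera intrinsics and pose; only the error aggregation over valid pixels is sketched here.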
“…In the literature, edge reconstruction is often limited to the reconstruction of line segments, i.e., straight edges, and existing approaches rely on video sequences for their estimation. Only Hofer et al. [8], with Line3D++, proposed an approach to estimate 3D segments in a Multi-View Stereo scenario. This method, however, is not able to recover curved edges.…”
Section: Introduction (mentioning)
confidence: 99%
“…Finally, we ran our plane detection and surface reconstruction, using a complete plane arrangement as baseline. [Figure caption: (1) image sample, (2) segments from Line3D++ [19,21], (3) our reconstruction; point-based reconstructions with Colmap [47] then (4) Poisson [27], (5) Delaunay [31], (6) Chauve et al. [10], (7) Polyfit [38].]…”
Section: Methods (mentioning)
confidence: 99%
“…It measures the number of times a 3D segment l is to be considered an outlier as it should not be visible from a given viewpoint v, weighted by the length of the visible parts l_{v,f} of l on the offending faces f (possibly fragmented due to occlusions). Contrary to E_prim(x), all segments are considered in E_vis(x), not just segments supporting a plane. [Figure 4 caption: HouseInterior: (1) an image of the dataset, (2) points densely sampled on the surface, (3) reconstruction with [10], (4) failed reconstruction with [10] from points sampled on lines, (5) 3D lines detected with Line3D++ [19], with noise and outliers, (6) our reconstruction, which is nonetheless superior, (7) histograms of distance errors w.r.t. ground truth (m).]…”
Section: Surface Reconstruction (mentioning)
confidence: 99%
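The exact definition of E_vis(x) is not quoted above. Purely as an illustration of the idea the statement describes (summing, per viewpoint, the length of segment parts that the candidate surface says should be hidden but are in fact observed), one plausible form could be written as below. The notation is assumed here, not taken from the cited paper: S is the set of 3D segments, V the set of viewpoints, and F_v(x, l) the faces of the candidate surface x that occlude segment l from viewpoint v.

```latex
% Illustrative (assumed) form of the visibility term; not the cited paper's exact definition.
% It sums, over viewpoints v, segments l, and offending faces f, the length of the
% visible part l_{v,f} of l that the candidate surface x claims should be occluded.
E_{\mathrm{vis}}(x) \;=\; \sum_{v \in V} \sum_{l \in S} \sum_{f \in F_v(x,\,l)} \big\| l_{v,f} \big\|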
“…If we are familiar with the target object in advance, it can be identified using image matching, but recognition is challenging in the absence of such prior knowledge. As it is unnecessary to reconstruct irrelevant regions such as the sky and ground, the target scene objects can be simply represented as planes [17] or lines [18].…”
Section: Line-based Image Segmentation (mentioning)
confidence: 99%