2015
DOI: 10.1109/tvcg.2015.2459891
Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices

Abstract: Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, however, remains challenging. In this paper we present a range of modifications to existing volumetric integration…
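The core operation the abstract refers to, fusing each incoming depth image into a voxel grid of truncated signed distances, can be illustrated with a minimal CPU sketch of the standard projective TSDF update used by KinectFusion-style systems. This is not the paper's optimised mobile implementation; the dense grid layout, the Intrinsics and Voxel structs, the truncation band mu and the weight cap are illustrative assumptions.

```cpp
#include <cmath>
#include <vector>

// Minimal pinhole intrinsics (illustrative; real systems read these from calibration).
struct Intrinsics { float fx, fy, cx, cy; };

struct Voxel { float tsdf = 1.0f; float weight = 0.0f; };

// Fuse one depth image (metres, row-major, width*height) into a dense voxel grid.
// 'pose' maps world coordinates into the camera frame (row-major 4x4).
void integrateDepth(std::vector<Voxel>& grid, int dimX, int dimY, int dimZ,
                    float voxelSize, const float* depth, int width, int height,
                    const Intrinsics& K, const float pose[16], float mu)
{
    for (int z = 0; z < dimZ; ++z)
      for (int y = 0; y < dimY; ++y)
        for (int x = 0; x < dimX; ++x) {
            // Voxel centre in world coordinates.
            float wx = (x + 0.5f) * voxelSize;
            float wy = (y + 0.5f) * voxelSize;
            float wz = (z + 0.5f) * voxelSize;
            // Transform into the camera frame.
            float px = pose[0]*wx + pose[1]*wy + pose[2]*wz  + pose[3];
            float py = pose[4]*wx + pose[5]*wy + pose[6]*wz  + pose[7];
            float pz = pose[8]*wx + pose[9]*wy + pose[10]*wz + pose[11];
            if (pz <= 0.0f) continue;                        // behind the camera
            // Project into the depth image.
            int u = static_cast<int>(K.fx * px / pz + K.cx + 0.5f);
            int v = static_cast<int>(K.fy * py / pz + K.cy + 0.5f);
            if (u < 0 || u >= width || v < 0 || v >= height) continue;
            float d = depth[v * width + u];
            if (d <= 0.0f) continue;                         // invalid measurement
            // Signed distance along the ray, truncated to the band [-mu, mu].
            float sdf = d - pz;
            if (sdf < -mu) continue;                         // far behind the surface, skip
            float tsdf = std::fmin(1.0f, sdf / mu);
            // Weighted running average; the weight is capped so the model stays updatable.
            Voxel& vox = grid[(z * dimY + y) * dimX + x];
            float w = vox.weight + 1.0f;
            vox.tsdf = (vox.tsdf * vox.weight + tsdf) / w;
            vox.weight = std::fmin(w, 100.0f);
        }
}
```

In practice the per-voxel loop is what the paper's GPU and mobile implementations parallelise; the sketch above only shows the arithmetic each voxel performs.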

Cited by 266 publications (246 citation statements)
References 21 publications
“…It should also be noted that pose tracking and 3D model generation from depth camera frames is an active area of research and we expect future depth cameras to exhibit improved performance for both aspects of operation. 37,38 Fabrication errors in the compensator would also be expected to improve with additional advancements in molding technique. The 3D-printed compensator shapes were evaluated to be within 0.1 mm of the desired thickness, indicating that errors were introduced in the molding process.…”
Section: Discussion (mentioning)
confidence: 99%
“…For the depth map fusion we use the approach presented by Kähler et al. (2015). This approach is highly optimized for real-time processing on a GPU.…”
Section: Depth Map Fusion With Semantic Filtering (mentioning)
confidence: 99%
“…We exploit the hashing of Nießner et al. (2013) to remain unconstrained in the size of the scene. To be able to use our aligned depth maps, we extended the framework of Kähler et al. (2015) to take camera poses as input for positioning the depth maps and not to perform any tracking.…”
Section: Depth Map Fusion With Semantic Filtering (mentioning)
confidence: 99%
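The voxel-block hashing referred to above indexes the scene through small voxel blocks addressed by a hash table, so memory is only allocated where surface has actually been observed. The sketch below loosely follows the hashing scheme of Nießner et al. (2013); the prime constants and the 8-voxel block size match the values commonly quoted for that scheme, but the data structures are simplified assumptions rather than the actual framework of Kähler et al. (2015).

```cpp
#include <cstddef>
#include <unordered_map>

// Edge length of a voxel block, assumed to be 8 voxels for illustration.
constexpr int BLOCK_SIZE = 8;

struct BlockCoord {
    int x, y, z;
    bool operator==(const BlockCoord& o) const { return x == o.x && y == o.y && z == o.z; }
};

// Spatial hash in the style of Nießner et al. (2013): XOR of the block
// coordinates multiplied by large primes.
struct BlockHash {
    std::size_t operator()(const BlockCoord& b) const {
        std::size_t h1 = static_cast<std::size_t>(b.x) * 73856093u;
        std::size_t h2 = static_cast<std::size_t>(b.y) * 19349669u;
        std::size_t h3 = static_cast<std::size_t>(b.z) * 83492791u;
        return h1 ^ h2 ^ h3;
    }
};

struct VoxelBlock {
    // Zero weight marks a voxel as not yet observed.
    float tsdf[BLOCK_SIZE * BLOCK_SIZE * BLOCK_SIZE]   = {};
    float weight[BLOCK_SIZE * BLOCK_SIZE * BLOCK_SIZE] = {};
};

// Sparse scene: blocks are created lazily, so storage grows with the observed
// surface instead of with a fixed bounding volume.
using SparseScene = std::unordered_map<BlockCoord, VoxelBlock, BlockHash>;

// Map a voxel's integer grid coordinate to its block, allocating on first touch.
VoxelBlock& lookupBlock(SparseScene& scene, int vx, int vy, int vz) {
    auto floorDiv = [](int a, int b) { return (a >= 0) ? a / b : -((-a + b - 1) / b); };
    BlockCoord bc{ floorDiv(vx, BLOCK_SIZE), floorDiv(vy, BLOCK_SIZE), floorDiv(vz, BLOCK_SIZE) };
    return scene[bc];  // operator[] default-constructs the block if it is absent
}
```

The modification described by the citing paper, supplying externally aligned camera poses instead of running the framework's own tracker, sits one level above this structure: the given pose is used directly when projecting voxels of the visible blocks into each depth map.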
“…One of the main scientific and technological achievements at the start of this trend is undoubtedly Kinect Fusion, by Newcombe et al. [1], shortly followed by several extensions [2], [3] and alternative formulations of the original problem and solution [4], [5]. At the core of these algorithms is an elegant method for volumetric integration of depth information into a truncated signed distance field (TSDF).…”
Section: Introduction (mentioning)
confidence: 99%
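The volumetric integration mentioned here reduces, in the KinectFusion formulation, to a per-voxel weighted running average of truncated signed distance observations. A sketch of the standard update rule, with notation chosen here rather than taken from the paper:

```latex
F_k(\mathbf{v}) = \frac{W_{k-1}(\mathbf{v})\,F_{k-1}(\mathbf{v}) + w_k(\mathbf{v})\,f_k(\mathbf{v})}{W_{k-1}(\mathbf{v}) + w_k(\mathbf{v})},
\qquad
W_k(\mathbf{v}) = \min\bigl(W_{k-1}(\mathbf{v}) + w_k(\mathbf{v}),\, W_{\max}\bigr)
```

where f_k(v) is the truncated signed distance observed for voxel v in depth frame k and w_k(v) is its per-observation weight, often simply 1.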