2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7298631
DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time

Abstract: Real-time reconstructions of a moving scene with DynamicFusion; both the person and the camera are moving. The initially noisy and incomplete model is progressively denoised and completed over time (left to right).


Cited by 796 publications (676 citation statements)
References 37 publications
“…However, these methods cannot capture dynamic scenes. Newcombe et al [40], Dou et al [16] and Innmann et al [27] support changes to the scene by deforming the reconstructed volume. These methods expect accurate object tracking, which can fail under complex or fast movement.…”
Section: Volumetric Methods
confidence: 99%
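The volumetric deformation these citing works describe can be sketched in miniature. This is a hypothetical illustration, not DynamicFusion's actual method (which blends dual quaternions over a deformation graph): here, canonical-model points are warped into the live frame by blending per-node rigid transforms with Gaussian distance weights, and all node positions and transforms are made up for the example.

```python
import numpy as np

def warp_points(points, node_pos, node_rot, node_trans, sigma=0.05):
    """Warp each point by a distance-weighted blend of per-node rigid transforms.

    Naive linear blending of rotation matrices is only approximate; it stands in
    for the dual-quaternion blending used in practice.
    """
    warped = np.empty_like(points)
    for i, p in enumerate(points):
        # Gaussian weights based on squared distance to each deformation node
        d2 = np.sum((node_pos - p) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        w /= w.sum()
        # Weighted blend of each node's rigid transform applied to p
        warped[i] = sum(wk * (Rk @ p + tk)
                        for wk, Rk, tk in zip(w, node_rot, node_trans))
    return warped

# Two hypothetical nodes: one static, one translated upward by 0.1
node_pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
node_rot = [np.eye(3), np.eye(3)]
node_trans = [np.zeros(3), np.array([0.0, 0.1, 0.0])]

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
out = warp_points(pts, node_pos, node_rot, node_trans)
```

A point near the static node stays put while a point at the moving node follows its translation, which is the sense in which these systems "support changes to the scene by deforming the reconstructed volume."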
“…The most notable in this area is the work of Newcombe et al, in particular Kinect Fusion [23] and Dynamic Fusion [22]. However, since this work focuses on tracking and modelling from strictly monocular RGB video, a more detailed review is omitted here.…”
Section: Related Work
confidence: 99%
“…However, these approaches assume a reconstruction of the full non-rigid object surface at each time frame and do not easily extend to 4D alignment of partial surface reconstructions or depth maps. The widespread availability of low-cost depth sensors has motivated the development of methods for temporal correspondence or alignment and 4D modelling from partial dynamic surface observations [8,9,10,11]. Scene flow techniques [12,13] typically estimate the pairwise surface or volume correspondence between reconstructions at successive frames but do not extend to 4D alignment or correspondence across complete sequences due to drift and failure for rapid and complex motion.…”
Section: Introduction
confidence: 99%
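The drift objection in the excerpt above can be made concrete with a toy numeric sketch (the numbers are illustrative, not from any cited work): composing pairwise frame-to-frame motion estimates accumulates each step's error into all later frames, whereas aligning every frame directly to one canonical frame leaves each frame with only a single estimate's error.

```python
import numpy as np

# Toy setup: a rigid 1-D translation of 0.1 per frame, and each individual
# motion estimate carries a systematic bias of 0.01 (illustrative values).
true_step = 0.1
bias = 0.01
n = 50  # number of frames

# Pairwise chaining: each step's bias is composed into every later frame,
# so the end-of-sequence error grows linearly with n.
chained = np.cumsum(np.full(n, true_step + bias))

# Frame-to-canonical alignment: each frame carries only its own estimate's
# bias, so the error stays bounded regardless of sequence length.
canonical = true_step * np.arange(1, n + 1) + bias

drift_chained = chained[-1] - true_step * n      # n * bias = 0.5
drift_canonical = canonical[-1] - true_step * n  # bias = 0.01
```

This is why pairwise scene flow degrades "across complete sequences" while canonical-model approaches such as DynamicFusion, which always align the live frame against one canonical shape, avoid unbounded drift.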
“…However, these methods fail in the case of occlusion, large motions, background clutter, deformation, moving cameras and appearance of new parts of objects. Recent work has introduced approaches, such as DynamicFusion [8], for 4D modelling from depth image sequences integrating temporal observations of non-rigid shape to resolve fine detail. Approaches to 4D modelling from partial surface observations are currently limited to relatively simple isolated objects such as the human face or upper-body and do not handle large non-rigid deformations such as loose clothing.…”
Section: Introduction
confidence: 99%