2018
DOI: 10.1111/cgf.13316
Temporally Consistent Motion Segmentation From RGB‐D Video

Abstract: Temporally consistent motion segmentation from RGB‐D videos is challenging because of the limitations of current RGB‐D sensors. We formulate segmentation as a motion assignment problem, where a motion is a sequence of rigid transformations through all frames of the input. We capture the quality of each potential assignment by defining an appropriate energy function that accounts for occlusions and a sensor‐specific noise model. To make energy minimization tractable, we work with a discrete set instead of the c…
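The motion-assignment formulation in the abstract can be illustrated with a minimal sketch: given point correspondences between two frames and a discrete set of candidate rigid motions, each point is assigned the motion that minimizes its squared residual. This is a hypothetical, data-term-only illustration — it assumes known correspondences and omits the occlusion handling, sensor noise model, and temporal coupling described in the paper; `assign_motions` and all other names are invented here.

```python
import numpy as np

def assign_motions(src, dst, motions):
    """Assign each correspondence (src[i] -> dst[i]) the candidate rigid
    motion (R, t) that minimizes its squared residual (data term only)."""
    # residuals[i, k] = || (R_k @ src[i] + t_k) - dst[i] ||^2
    residuals = np.stack(
        [np.sum((src @ R.T + t - dst) ** 2, axis=1) for R, t in motions],
        axis=1,
    )
    labels = np.argmin(residuals, axis=1)  # per-point motion label
    energy = residuals[np.arange(len(src)), labels].sum()
    return labels, energy

# Two candidate motions: identity, and a translation by (1, 0, 0).
motions = [
    (np.eye(3), np.zeros(3)),
    (np.eye(3), np.array([1.0, 0.0, 0.0])),
]
src = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
dst = np.array([[0.0, 0.0, 0.0], [6.0, 0.0, 0.0]])  # second point shifted +x
labels, energy = assign_motions(src, dst, motions)
# labels -> [0, 1]; energy -> 0.0
```

In the paper the per-point choice is additionally regularized (occlusion- and noise-aware energy terms) and the candidate motions span all frames, so the labeling is solved jointly rather than independently per point as in this toy version.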

Cited by 7 publications (8 citation statements)
References 30 publications
“…Similarly, Bertholet et al . [BIZ18] address motion segmentation from RGB‐D videos by representing motion as a sequence of rigid transformations through all input frames in an energy optimization framework. Vlachos et al .…”
Section: Related Work
confidence: 99%
“…These require exact point correspondences that can only be reliably obtained for small deformations and motions and, hence, are by design not suitable for analysing growth processes with changing topology [XSWL15]. Further limitations of previous work on deformations based on exact point correspondences regarding their applicability on growth processes include the assumption of piecewise rigid motion as used for object tracking [BIZ18], the requirement of a 3D object template that is deformed and fitted to the point clouds in adjacent time steps using respective priors (without enforcing temporal coherence) [ZFG*17], the involvement of a visual hull prior that biases the optimization in the context of mesh‐based approaches [LLV*12] or the need for large databases required by learning‐based methods [WLX*18] that are hard to acquire due to the time‐consuming nature of the scanning process and the growth of the plants themselves.…”
Section: Introduction
confidence: 99%
“…There is also a set of works related to dynamic scene reconstruction but not focused on voxel-based techniques: 1) Other template/mesh-based deformation approaches [40,21,5]; 2) Methods for learning-based schemes that may handle larger changes [1,12,21,14,39,13]; 3) Methods on point correspondence based interpolation that do not require the prior of a mesh representation and are more flexible with respect to topological changes [23,41,44,2]; 4) Finally, some point distribution based approaches that do not require correspondence search and provide even more flexibility [8,35,17].…”
Section: Related Work
confidence: 99%
“…Robust solutions now exist for capturing static scenes by fusing raw depth scans across multiple frames to recover from incomplete and noisy measurements [CL96; RHL02; NIH ∗ 11; CBI13; NZIS13; KPR ∗ 15; KDSX15; DNZ ∗ 17]. However, there are only limited options for capturing dynamic scenes (e.g., requiring background initialization [MS14; RA17], using semantic priors [RBA18; XLT ∗ 19a; SS19; HGDG17; XLT ∗ 19b; SS19; MCB ∗ 18], exploiting scene flow information [BIZ18; GKM ∗ 17], or handling deformable objects [NFS15; IZN ∗ 16; DKD ∗ 16; SBCI17]). This is rather surprising since our surroundings are mostly dynamic as objects are moved around in course of our regular interactions, for instance, a person moves a box, table, or chair.…”
Section: Introduction
confidence: 99%