2017 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv.2017.00039

Multiframe Scene Flow with Piecewise Rigid Motion

Abstract: We introduce a novel multiframe scene flow approach that jointly optimizes the consistency of the patch appearances and their local rigid motions from RGB-D image sequences. In contrast to competing methods, we take advantage of an oversegmentation of the reference frame and robust optimization techniques. We formulate scene flow recovery as a global non-linear least squares problem which is iteratively solved by a damped Gauss-Newton approach. As a result, we obtain a qualitatively new level of accuracy i…
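The damped Gauss-Newton iteration mentioned in the abstract can be sketched as follows. This is a minimal toy solver, not the authors' implementation: the `residual`/`jacobian` callables and the exponential-fit example are hypothetical stand-ins for the paper's coupled appearance and rigid-motion energy terms.

```python
import numpy as np

def damped_gauss_newton(residual, jacobian, x0, damping=1e-3, iters=50, tol=1e-10):
    """Minimise 0.5 * ||r(x)||^2 by damped Gauss-Newton.

    `residual` returns r(x); `jacobian` returns J(x) = dr/dx.
    The constant `damping` term regularises the normal equations
    (a fixed-lambda Levenberg-style damping).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Damped normal equations: (J^T J + lambda * I) dx = -J^T r
        H = J.T @ J + damping * np.eye(x.size)
        dx = np.linalg.solve(H, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy usage: recover (a, b) of y = a * exp(b * t) from noise-free samples.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
p_hat = damped_gauss_newton(res, jac, x0=[1.0, 0.0])
```

In the paper's setting the unknowns are far higher-dimensional (per-patch rigid motions), but the update structure is the same: linearise the residual, damp the normal equations, solve, and repeat.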

Cited by 24 publications (22 citation statements). References 29 publications.
“…Ren et al. [15] predicted scene flow using iterative semantic segmentation in stereoscopic vision, under the assumption of rigidly moving foreground objects and a static background. Other studies combine the assumption of local rigid motion with flexible per-point motion to recover more detailed motion [18][19][20][21]. Vogel et al. [22] proposed a piecewise rigid scene flow model in which super-pixels are used to segment the scene and constrain the scene flow estimation.…”
Section: Related Work
confidence: 99%
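The piecewise-rigid assumption discussed above reduces, per segment, to fitting a single rigid transform to that segment's 3D points across two frames. A standard least-squares rigid fit (the Kabsch/Procrustes method — shown here as a generic illustration, not code from any of the cited papers) looks like:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ R @ P + t (Kabsch).

    P, Q: (3, N) corresponding point sets, e.g. one superpixel's
    back-projected 3D points in the reference and target frames.
    """
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    # SVD of the cross-covariance of the centred point sets.
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = cq - R @ cp
    return R, t

# Usage: recover a known rotation/translation from noise-free points.
rng = np.random.default_rng(1)
c, s = np.cos(0.5), np.sin(0.5)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([[0.1], [-0.2], [0.3]])
P = rng.standard_normal((3, 50))
R_est, t_est = fit_rigid(P, R_true @ P + t_true)
```

Grouping many such per-segment transforms, or regularising neighbouring segments toward similar transforms, is where the cited methods differ.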
“…Its extension to the moving-camera case requires disambiguating the camera ego-motion from object scene motions in 3D. Due to the intrinsic complexity of such a task, existing methods often address it with known camera parameters [1,37] or assume scene motions are piecewise rigid [21,23,10,41,42,44]. When depth is known, scene flow can be estimated more accurately.…”
Section: Related Work
confidence: 99%
“…Different from the color image, the depth map is rendered from the 3D mesh, which is less noisy and more complete than the raw depth. Since the camera movement during 3D acquisition is small between frames, we sub-sample frames at intervals of [1,2,5,10,20] to create larger motions. We employ a multi-pass rendering approach to generate depth, optical flow and rigidity masks as our ground truth.…”
Section: REFRESH Dataset
confidence: 99%
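The frame sub-sampling protocol described in that excerpt can be sketched as follows; `subsample_pairs` is a hypothetical helper written for illustration, not code from the dataset release:

```python
def subsample_pairs(num_frames, strides=(1, 2, 5, 10, 20)):
    """Yield (reference, target) frame-index pairs at several strides.

    Larger strides pair frames that are further apart in time, so the
    same sequence yields progressively larger inter-frame motions.
    """
    for s in strides:
        for i in range(num_frames - s):
            yield i, i + s

# Usage: 6 frames at strides 1 and 5.
pairs = list(subsample_pairs(6, strides=(1, 5)))
```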
“…In the case of RGB-D sequences, early works attempt to estimate the 3D motion field (scene flow) between consecutive frames [Christoph et al. 2015; Hornáček et al. 2014; Quiroga et al. 2014; Vogel et al. 2014]. To recover parts, super-segments can be extracted and grouped according to their estimated rigid transformations from the motion field [Golyanik et al. 2017]. Alternatively, patches or points lifted from the RGB-D frames can be clustered into segments based on their overall flow similarity across frames, using Expectation-Maximization or coordinate-descent formulations [Jaimez et al. 2015; Stückler and Behnke 2015].…”
Section: Prior Work
confidence: 99%
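The clustering idea in the last sentence of that excerpt can be illustrated with plain k-means over per-point flow vectors. This is a deliberately crude stand-in for the cited EM and coordinate-descent formulations — it shares their alternation between assignments and per-segment model updates, but uses a trivial constant-translation motion model and a deterministic initialisation invented for this sketch:

```python
import numpy as np

def cluster_flow(flow, k=2, iters=10):
    """Alternate hard assignments and cluster-mean updates over (N, 3)
    flow vectors; each cluster models one roughly coherent motion."""
    # Deterministic init: pick k seeds evenly spaced through the array.
    centers = flow[np.linspace(0, len(flow) - 1, k, dtype=int)].astype(float)
    labels = np.zeros(len(flow), dtype=int)
    for _ in range(iters):
        # Assignment step: nearest cluster centre in flow space.
        labels = np.argmin(((flow[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Update step: each non-empty cluster moves to its members' mean.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = flow[labels == j].mean(axis=0)
    return labels

# Usage: two groups of points translating differently split cleanly.
flow = np.vstack([np.zeros((10, 3)), np.ones((10, 3))])
labels = cluster_flow(flow, k=2)
```

The real methods replace the cluster mean with a rigid (SE(3)) motion model per segment and add soft assignments or spatial regularisation, but the alternating structure is the same.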