2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00062
FlowNet3D: Learning Scene Flow in 3D Point Clouds

Abstract: Many applications in robotics and human-computer interaction can benefit from understanding 3D motion of points in a dynamic environment, widely noted as scene flow. While most previous methods focus on stereo and RGB-D images as input, few try to estimate scene flow directly from point clouds. In this work, we propose a novel deep neural network named FlowNet3D that learns scene flow from point clouds in an end-to-end fashion. Our network simultaneously learns deep hierarchical features of point clouds and fl…

Cited by 455 publications (708 citation statements)
References 36 publications
“…Held et al. utilize probabilistic approaches to point cloud segmentation and tracking [21,23,22]. Recent work demonstrates how 3D instance segmentation and 3D motion (in the form of 3D scene flow, or per-point velocity vectors) can be estimated directly on point cloud input with deep networks [59,38]. Our dataset enables 3D tracking with sensor fusion in a 360° frame.…”
Section: Related Work
Mentioning confidence: 99%
“…However, its novel operator is defined on each point, and pooling is the only proposed way of aggregating information. FlowNet3D [25] builds on PointNet++ [35] and uses a flow embedding layer to mix two point clouds, so it shares the aforementioned drawbacks of [35]. Work on scene flow estimation with other input formats (stereo [19], RGBD [20], light field [27]) is less related, and we refer to Yan and Xiang [45] for a survey.…”
Section: Related Work
Mentioning confidence: 99%
“…The difference is that the neural network processes point pairs instead of individual points. FlowNet3D [14] lets the shared neural network take mixed types of modalities, i.e. geometric features and displacement, as inputs to learn scene flow between two point clouds.…”
Section: Related Work
Mentioning confidence: 99%
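The flow-embedding idea the excerpts above describe — feeding mixed modalities (geometric displacements between the two clouds plus both points' features) through a shared network and pooling over neighbors — can be illustrated with a minimal NumPy sketch. This is a toy, not the authors' implementation: the radius query, the single-layer MLP, and all names and shapes here are assumptions for illustration.

```python
import numpy as np

def flow_embedding(p1, f1, p2, f2, weights, radius=1.0):
    """Toy flow-embedding-style layer (illustrative, not FlowNet3D's code).

    For each point in the first cloud, gather points of the second cloud
    within `radius`, concatenate the displacement vector with the features
    of both points (the "mixed modalities"), pass the result through a
    shared one-layer MLP with ReLU, and max-pool over the neighbors.

    p1: (n1, 3), f1: (n1, c)  -- first point cloud and its features
    p2: (n2, 3), f2: (n2, c)  -- second point cloud and its features
    weights: (3 + 2*c, out_dim) -- shared MLP weights (assumed shape)
    returns: (n1, out_dim) per-point embeddings
    """
    n1, c = f1.shape
    out = np.zeros((n1, weights.shape[1]))
    for i in range(n1):
        disp = p2 - p1[i]                              # (n2, 3) displacements
        mask = np.linalg.norm(disp, axis=1) < radius   # radius neighborhood
        if not mask.any():
            continue                                   # no neighbors: zeros
        # mixed modalities: displacement + features of both points
        mixed = np.concatenate(
            [disp[mask],
             np.repeat(f1[i:i + 1], mask.sum(), axis=0),
             f2[mask]],
            axis=1)                                    # (k, 3 + 2c)
        h = np.maximum(mixed @ weights, 0.0)           # shared MLP + ReLU
        out[i] = h.max(axis=0)                         # max-pool over neighbors
    return out
```

A real implementation would batch this, learn the MLP weights, and stack several such layers, but the sketch shows why the layer "mixes" the two clouds: every output embedding depends jointly on a point's own features and on the displacements to, and features of, its neighbors in the other cloud.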