2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw56347.2022.00488
PointMotionNet: Point-Wise Motion Learning for Large-Scale LiDAR Point Clouds Sequences

Abstract: We propose a point-based spatiotemporal pyramid architecture, called PointMotionNet, to learn motion information from a sequence of large-scale 3D LiDAR point clouds. A core component of PointMotionNet is a novel technique for point-based spatiotemporal convolution, which finds the point correspondences across time by leveraging a time-invariant spatial neighboring space and extracts spatiotemporal features. To validate PointMotionNet, we consider two motion-related tasks: point-based motion prediction and mult…
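The abstract's key idea, a time-invariant spatial neighboring space, means that for a query point the same spatial radius is used to gather neighbors in every frame of the sequence, so correspondences across time fall inside one shared neighborhood. A minimal NumPy sketch of that grouping step (the function name and radius are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def spatiotemporal_neighbors(frames, query, radius=0.3):
    """Gather, for one query point, all points within the same spatial
    radius from each frame in the sequence -- a time-invariant spatial
    neighborhood, per the abstract. (Hypothetical sketch, not the
    authors' code.)

    frames: list of (N_i, 3) arrays, one per time step.
    query:  (3,) point taken from one of the frames.
    Returns one (M_i, 3) neighbor array per frame.
    """
    neighborhoods = []
    for pts in frames:
        # Euclidean distance from every point in this frame to the query.
        dist = np.linalg.norm(pts - query, axis=1)
        # The same radius is applied to every frame (time-invariant).
        neighborhoods.append(pts[dist <= radius])
    return neighborhoods

# Toy usage: two frames of random points in the unit cube.
rng = np.random.default_rng(0)
frames = [rng.random((100, 3)), rng.random((100, 3))]
hoods = spatiotemporal_neighbors(frames, frames[0][0], radius=0.3)
print([h.shape[0] for h in hoods])  # neighbor count per frame
```

Features extracted from these per-frame neighborhoods would then be fed to the point-based spatiotemporal convolution the paper proposes.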

Cited by 9 publications (11 citation statements)
References 35 publications (64 reference statements)
“…46 Wang et al presented PointMotionNet, a spatiotemporal convolutional neural network that extracts spatio-temporal features, to distinguish the moving and static objects. 47 Pruim et al built upon the popular U-Net architecture by replacing 2D convolutions with 3D convolutions to detect maritime targets in video. The 3D U-Net spatio-temporal network outperforms a 2D U-Net that performs per-frame segmentation.…”
Section: Related Work
confidence: 99%
“…Wang et al. proposed a point‐wise motion learning network (PointMotionNet) [27], which learns motion information from a sequence of large‐scale 3D LiDAR point clouds. Peri et al.…”
Section: Related Work
confidence: 99%
“…Salzmann et al [26] incorporated semantic maps and camera images for motion prediction. Wang et al proposed a point-wise motion learning network (PointMotionNet) [27], which learns motion information from a sequence of large-scale 3D LiDAR point clouds. Peri et al proposed a network that forecasts trajectories from LiDAR via future object detection [28].…”
Section: Figure 1 (A Comparison of Long Short-Term Memory (LSTM) Raw…)
confidence: 99%
“…However, by processing the neighbourhoods at multiple scales, the network can capture temporal correlations that would otherwise be hidden. This hierarchical learning strategy has proved to be highly successful at learning from point cloud sequences and has been widely adopted throughout the literature [8,10,15,18,19,24,37,45]. In PSTNet [8] a hierarchical architecture is used for the action classification of point cloud sequences.…”
Section: Related Work
confidence: 99%