2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01140

MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird’s Eye View Maps

Cited by 135 publications (135 citation statements) · References 40 publications
“…Such complete features enable the backbone network to make predictions with high confidence, allowing the outcomes to be safely used in later pipeline stages such as planning and control. The conducted experiments demonstrate the superiority of LiCaNet over our previous work [12], MotionNet, and thus over the other state-of-the-art models reported in [14]. LiCaNet achieves outstanding accuracy in real time for perception and motion prediction, especially for small and distant objects.…”
Section: Introduction
confidence: 52%
“…As LiDARs are the most common sensors used in autonomous driving, we discuss the different ways available in the literature to represent LiDAR data. Point clouds can be processed in point-based form [9], as 3D voxel grids [10], [11], in bird's-eye view (BEV) [12], [13], [3], [14], and in range view (RV) [12], [15], [16], [3], [17], [18]. Processing a 3D LiDAR point cloud in its raw form is straightforward and requires no transformation.…”
Section: Introduction
confidence: 99%
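The BEV representation contrasted in the excerpt above can be illustrated with a short rasterization sketch. The function below is a minimal, hypothetical NumPy example (the grid extent, 0.25 m cell size, and 13 height bins are illustrative choices, not taken from any of the cited papers): it bins each point into an x/y cell and a height slice, producing the kind of occupancy grid a BEV backbone consumes.

```python
import numpy as np

def points_to_bev_occupancy(points, x_range=(-32.0, 32.0), y_range=(-32.0, 32.0),
                            z_range=(-3.0, 2.0), voxel_size=0.25, n_height_bins=13):
    """Rasterize an (N, 3) LiDAR point cloud into a BEV occupancy grid.

    Returns an (H, W, n_height_bins) binary tensor: each BEV cell records which
    height slices contain at least one point. All parameter values here are
    illustrative defaults, not the settings of any specific cited model.
    """
    # Keep only points inside the region of interest.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
            (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    pts = points[mask]

    # Discretize x/y into BEV cells and z into height bins.
    W = int((x_range[1] - x_range[0]) / voxel_size)
    H = int((y_range[1] - y_range[0]) / voxel_size)
    ix = ((pts[:, 0] - x_range[0]) / voxel_size).astype(np.int64)
    iy = ((pts[:, 1] - y_range[0]) / voxel_size).astype(np.int64)
    iz = ((pts[:, 2] - z_range[0]) / (z_range[1] - z_range[0]) * n_height_bins).astype(np.int64)
    iz = np.clip(iz, 0, n_height_bins - 1)

    bev = np.zeros((H, W, n_height_bins), dtype=np.float32)
    bev[iy, ix, iz] = 1.0   # mark occupied cells
    return bev

# Example: 100k synthetic points -> (256, 256, 13) BEV grid.
cloud = np.random.uniform(low=[-32, -32, -3], high=[32, 32, 2], size=(100_000, 3))
print(points_to_bev_occupancy(cloud).shape)
```

Point-based and full 3D-voxel pipelines trade this fixed 2D grid for either raw point sets or a dense 3D volume; the BEV grid keeps the tensor small enough for real-time 2D convolutions, which is one reason the cited BEV methods favor it.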
“…Models and Dataset. For our evaluation, we use the models detailed in Section 3.3, with PointPillars [8] and SECOND [14] used for 3D object detection and MotionNet [12] for object-motion prediction. For AV driving scenes, we use the mini dataset from nuScenes [1], which contains 3D point clouds of 10 driving scenes.…”
Section: Experiments and Results
confidence: 99%
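The excerpt above evaluates on the nuScenes mini split. The snippet below is a minimal sketch of how those 10 scenes can be iterated with the public nuscenes-devkit; the dataroot path is an assumption, and the loop only prints the shape of each scene's first LIDAR_TOP sweep rather than reproducing the cited evaluation pipeline.

```python
import os.path as osp

from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import LidarPointCloud

# The dataroot is an assumed local path; v1.0-mini is the official 10-scene split.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

for scene in nusc.scene:
    # First keyframe of the scene and its top-LiDAR sweep.
    sample = nusc.get('sample', scene['first_sample_token'])
    sd_record = nusc.get('sample_data', sample['data']['LIDAR_TOP'])
    pc = LidarPointCloud.from_file(osp.join(nusc.dataroot, sd_record['filename']))
    print(scene['name'], pc.points.shape)  # points is (4, N): x, y, z, intensity
```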
“…[16] proposes a method called LiDAR-flow, which provides robust estimation of dense scene flow by fusing sparse LiDAR with stereo images. [17] and [20] fuse temporal information to detect dynamic objects: they take multi-frame point clouds as input and regress the motion behavior of objects in bird's-eye view through the network. The advantage of this kind of method is that it detects all moving objects within the LiDAR's field of view, including objects not seen in the training set, which is of great significance for the safety of autonomous driving.…”
Section: B. LiDAR Dynamic Object Detection
confidence: 99%
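The multi-frame BEV input described in this last excerpt can be sketched as follows. This is a simplified, hypothetical example: simple_bev and stack_sweeps are illustrative names, the sweeps are synthetic, and motion compensation into the current ego frame is assumed to have happened beforehand; a network in the style of [17]/[20] would regress a per-cell displacement field (and often a class) from a richer version of this tensor.

```python
import numpy as np

def simple_bev(points, extent=32.0, res=0.25):
    """Single-channel BEV occupancy grid; a stripped-down rasterizer for one sweep."""
    size = int(2 * extent / res)
    grid = np.zeros((size, size), dtype=np.float32)
    keep = (np.abs(points[:, 0]) < extent) & (np.abs(points[:, 1]) < extent)
    ix = ((points[keep, 0] + extent) / res).astype(np.int64)
    iy = ((points[keep, 1] + extent) / res).astype(np.int64)
    grid[iy, ix] = 1.0
    return grid

def stack_sweeps(sweeps):
    """Stack T motion-compensated LiDAR sweeps (oldest first) into a
    (T, H, W) spatio-temporal BEV tensor for a motion-prediction network."""
    return np.stack([simple_bev(pts) for pts in sweeps], axis=0)

# Example: 5 synthetic sweeps -> a (5, 256, 256) network input.
sweeps = [np.random.uniform(low=[-32, -32, -3], high=[32, 32, 2], size=(50_000, 3))
          for _ in range(5)]
print(stack_sweeps(sweeps).shape)
```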