2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00321

MVFuseNet: Improving End-to-End Object Detection and Motion Forecasting through Multi-View Fusion of LiDAR Data

Cited by 23 publications (15 citation statements)
References 25 publications
“…Forecasting: Most sensor fusion works consider perception tasks, e.g. object detection [14]–[16], [18]–[23], [47]–[60], and motion forecasting [24]–[30], [49], [61], [62]. They operate on multi-view LiDAR, e.g.…”
Section: Sensor Fusion Methods for Object Detection and Motion (confidence: 99%)
“…An enhancement of MultiXNet [64] is proposed by Fadadu et al. [38]. Lastly, MVFuseNet [88] implements perception and motion forecasting by fusing sequential LiDAR data in both BEV and RV forms, in addition to HD map features. Unlike [38], MVFuseNet performs spatio-temporal fusion of both BEV and RV features across multiple frames.…”
Section: Predictions Using Fusion of LiDAR and Camera Sensors (confidence: 99%)
“…MVFuseNet reports improved performance over [38] in perception and motion prediction across all object categories. However, 3D object-level predictions were computed in both [38, 88]. No work has yet investigated pixel-wise joint perception and motion prediction using multi-modal fusion, which is essential for small and distant objects as it provides fine-grained, pixel-level precision.…”
Section: Predictions Using Fusion of LiDAR and Camera Sensors (confidence: 99%)
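The two LiDAR views fused by MVFuseNet can be illustrated with a minimal rasterization sketch. The snippet below is not the paper's implementation; it is a hypothetical NumPy example showing how the same point cloud is gridded once as a bird's-eye-view (BEV) occupancy map and once as a range-view (RV) spherical range image. All grid extents and resolutions are illustrative assumptions.

```python
import numpy as np

def to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), res=0.5):
    """Rasterize LiDAR points (N, 3) into a BEV occupancy grid (assumed extents)."""
    x, y = points[:, 0], points[:, 1]
    mask = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    cols = ((x[mask] - x_range[0]) / res).astype(int)
    rows = ((y[mask] - y_range[0]) / res).astype(int)
    h = int((y_range[1] - y_range[0]) / res)
    w = int((x_range[1] - x_range[0]) / res)
    bev = np.zeros((h, w), dtype=np.float32)
    bev[rows, cols] = 1.0  # mark occupied cells
    return bev

def to_rv(points, n_az=512, n_el=32, el_range=(-0.4363, 0.0873)):
    """Rasterize LiDAR points into a range-view (spherical) range image.

    el_range is an assumed sensor elevation span of roughly -25 to +5 degrees.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                    # azimuth in [-pi, pi)
    el = np.arcsin(z / np.maximum(r, 1e-6))  # elevation angle
    u = ((az + np.pi) / (2 * np.pi) * n_az).astype(int) % n_az
    v = ((el - el_range[0]) / (el_range[1] - el_range[0]) * n_el).astype(int)
    keep = (v >= 0) & (v < n_el)
    rv = np.zeros((n_el, n_az), dtype=np.float32)
    rv[v[keep], u[keep]] = r[keep]           # store range as the pixel value
    return rv
```

A multi-view network would feed each grid through its own 2D backbone and fuse the resulting features, e.g. by projecting RV features into the BEV frame before the detection and forecasting heads.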
“…A line of works [4, 18] realizes multi-view fusion either by aggregating features to refine proposals or by fusing features in the region constrained by the spatial projection. [7, 17] fuse the ROI features from the point cloud and the camera image for proposal refinement.…”
Section: Multi-View 3D Detection (confidence: 99%)