2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv51458.2022.00335

Multi-View Fusion of Sensor Data for Improved Perception and Prediction in Autonomous Driving

Cited by 38 publications (15 citation statements)
References 25 publications
“…Despite the fact that multi-modal sensor configurations are widely seen on self-driving vehicles [38,37,55,22,4,12], research on multi-modal sensor attacks is still very limited. Several preliminary works show the possibility of attacking multi-sensor fusion networks [61,7,72].…”
Section: Related Work
confidence: 99%
“…We focus on attacking the LiDAR-image object detector MMF [37], a state-of-the-art multi-sensor network architecture employed in modern self-driving systems. We believe our study is general enough as the multi-sensor fusion module is a common building block for other related works [22,4,12,69]. Specifically, we require the adversarial multi-sensory attack to be (1) input-agnostic so that it can be applied in different environments, (2) geometrically-consistent across image and LiDAR input modalities, and (3) fully-automatic for implementation at large-scale.…”
Section: Multi-sensor Adversarial Learning
confidence: 99%
“…Sensor Fusion Methods for Object Detection and Motion Forecasting: The majority of the sensor fusion works consider perception tasks, e.g. object detection [22,12,66,7,44,31,34,61,33,37] and motion forecasting [36,5,35,63,6,19,38]. They operate on multi-view LiDAR, e.g.…”
Section: Related Work
confidence: 99%
“…Prior works in the field of sensor fusion have mostly focused on the perception aspect of driving, e.g. 2D and 3D object detection [22,12,66,9,44,31,34,61,33,37], motion forecasting [22,36,5,35,63,6,19,38,32,9], and depth estimation [24,60,61,33]. These methods focus on learning a state representation that captures the geometric and semantic information of the 3D scene.…”
Section: Introduction
confidence: 99%