2020 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/iv47402.2020.9304558
Understanding Strengths and Weaknesses of Complementary Sensor Modalities in Early Fusion for Object Detection

Cited by 10 publications (4 citation statements)
References 6 publications
“…The performance of LiDAR 3D object detectors for autonomous driving has improved significantly in recent years, thanks mainly to the evolution of deep learning detection architectures and to the emergence of public labeled LiDAR point cloud datasets such as Waymo [1], NuScenes [2], KITTI [3], and ONCE [4]. The evolution of such detection architectures includes methods that voxelize the point cloud and employ either 3D convolutions [5] or 2D convolutions [6], [7], methods that operate on projections of LiDAR points [8], [9], [10], methods that operate directly on 3D points [11], and, more recently, methods that combine voxelization with point-level processing [12]. For a comprehensive chronological overview of 3D object detection algorithms using LiDAR data, see [13].…”
Section: Introduction (mentioning)
confidence: 99%
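The statement above surveys voxel-based LiDAR detection pipelines. As a rough illustration of the voxelization step it mentions (not the method of any cited paper), here is a minimal NumPy sketch that bins points into a regular grid; the voxel size and detection range are arbitrary, assumed values.

```python
import numpy as np

def voxelize(points, voxel_size=(0.2, 0.2, 0.2),
             pc_range=(-50.0, -50.0, -3.0, 50.0, 50.0, 1.0)):
    """Assign each LiDAR point (x, y, z) to an integer voxel coordinate."""
    pts = np.asarray(points, dtype=np.float32)
    lo = np.array(pc_range[:3], dtype=np.float32)
    hi = np.array(pc_range[3:], dtype=np.float32)
    size = np.array(voxel_size, dtype=np.float32)

    # Keep only points inside the (assumed) detection range.
    mask = np.all((pts[:, :3] >= lo) & (pts[:, :3] < hi), axis=1)
    pts = pts[mask]

    # Integer voxel coordinates along x, y, z; a detector would build
    # per-voxel features here and run 3D (or, after flattening the height
    # axis, 2D) convolutions over the resulting grid.
    coords = np.floor((pts[:, :3] - lo) / size).astype(np.int32)
    occupied = np.unique(coords, axis=0)
    return coords, occupied

# Random points standing in for a LiDAR sweep.
cloud = np.random.uniform(-50.0, 50.0, size=(10000, 3))
point_coords, occupied_voxels = voxelize(cloud)
print(occupied_voxels.shape)  # (num_occupied_voxels, 3)
```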
“…Problem. It is well known that the 3D point density of LiDAR point clouds for autonomous driving is highly non-uniform across distance ranges [24], [10]. This non-uniformity is mainly caused by the fixed scanning pattern of the LiDAR sensor and by its limited beam resolution, which results in large density discrepancies between objects located at different distances from the sensor.…”
Section: Introduction (mentioning)
confidence: 99%
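As a quick way to see the range-dependent density the statement above describes, the sketch below counts points in concentric distance bins. The bin edges are arbitrary choices for illustration; with a real sweep loaded as an (N, 3) array, the nearby bins dominate by orders of magnitude.

```python
import numpy as np

def points_per_range_bin(points,
                         bin_edges=(0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 70.0)):
    """Count how many LiDAR points fall into each concentric distance bin."""
    pts = np.asarray(points, dtype=np.float32)
    dist = np.linalg.norm(pts[:, :2], axis=1)      # range in the ground plane
    counts, _ = np.histogram(dist, bins=np.asarray(bin_edges))
    # Return {(near_edge, far_edge): count} for easy inspection; the steep
    # drop-off with range is the non-uniformity discussed above.
    return {edges: int(c)
            for edges, c in zip(zip(bin_edges[:-1], bin_edges[1:]), counts)}
```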
“…There exist a number of techniques in the literature to address this problem, including transfer learning and fine-tuning, where the network is first trained on the source dataset and then further trained (i.e., refined) using the available labeled target frames [2], [3], [4], [5], [6], [7], [8]. On the other hand, research in domain adaptation focuses on reducing the domain shift between two or more domains.…”
Section: Introduction and Prior Work (mentioning)
confidence: 99%
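The statement above outlines transfer learning by fine-tuning: pretrain on the labeled source dataset, then continue training on the few labeled target frames. Below is a minimal PyTorch-style sketch of that two-stage schedule; the detector module, the data loaders, the learning rates, and the convention that the model returns its training loss are all assumptions, not the setup of any cited work.

```python
import torch
from torch import nn, optim

def fine_tune(detector: nn.Module, source_loader, target_loader,
              source_epochs=30, target_epochs=5):
    """Train on the source domain, then refine on a few labeled target frames."""
    # Stage 1: train on the (large) labeled source dataset.
    opt = optim.Adam(detector.parameters(), lr=1e-3)
    for _ in range(source_epochs):
        for batch in source_loader:
            loss = detector(batch)      # assumed: the model returns its loss
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Stage 2: fine-tune on the small labeled target set with a lower
    # learning rate so the source-trained weights are only gently adjusted.
    opt = optim.Adam(detector.parameters(), lr=1e-4)
    for _ in range(target_epochs):
        for batch in target_loader:
            loss = detector(batch)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return detector
```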
“…This technique makes it possible both to improve the detection of obstacles in the driving environment via multi-sensor data fusion [11] and to enhance safety in autonomous vehicle navigation [16]. The main advantage is that data collected from different sensors may contain complementary information [6], and data collected from remote sensors can help fill blind spots [17]. To achieve data fusion, techniques such as the Kalman Filter [14], the Extended Kalman Filter [21], the Split Covariance Intersection Filter [18], and others have been used.…”
Section: Introduction (mentioning)
confidence: 99%
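The Kalman Filter mentioned above can be illustrated with a tiny example: a 1D constant-velocity filter that sequentially fuses position measurements from two sensors with different noise levels. The sensor noise values and the measurement sequence are made up for the example; real trackers use higher-dimensional states and calibrated covariances.

```python
import numpy as np

class KalmanFilter1D:
    """Minimal constant-velocity Kalman filter over state [position, velocity]."""

    def __init__(self, dt=0.1, process_var=1.0):
        self.x = np.zeros(2)                        # state estimate
        self.P = np.eye(2) * 10.0                   # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
        self.Q = np.eye(2) * process_var            # process noise
        self.H = np.array([[1.0, 0.0]])             # both sensors measure position

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, meas_var):
        # Standard Kalman update; each sensor contributes its own noise level.
        S = self.H @ self.P @ self.H.T + meas_var
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.atleast_1d(z) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P

# Sequentially fuse hypothetical LiDAR and radar ranges to one obstacle.
kf = KalmanFilter1D()
for lidar_z, radar_z in [(10.1, 10.4), (10.6, 10.9), (11.2, 11.0)]:
    kf.predict()
    kf.update(lidar_z, meas_var=np.array([[0.05]]))  # LiDAR: low noise
    kf.update(radar_z, meas_var=np.array([[0.40]]))  # radar: higher noise
    print(kf.x[0])
```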