2023
DOI: 10.48550/arxiv.2303.04302
Preprint
Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics

Abstract: One of the main paths towards the reduction of traffic accidents is the increase in vehicle safety through driver assistance systems or even systems with a complete level of autonomy. In these types of systems, tasks such as obstacle detection and segmentation, especially the Deep Learning-based ones, play a fundamental role in scene understanding for correct and safe navigation. Besides that, the wide variety of sensors in vehicles nowadays provides a rich set of alternatives for improvement in the robustness…

Cited by 4 publications (7 citation statements)
References 71 publications
“…These environments consist of rectangular paper boxes labeled 1 and 2, a sphere labeled 3, another rectangular paper box labeled 4, and a conical barrier with a base labeled 7; the two sensors obtained point cloud information at varying depth distances within their sensing range. For the sphere labeled 6 and the cylindrical barrier labeled 8, as well as the frame-type obstacle environment in Figure 10b, which included benches, chairs and guardrails, the 2D LiDAR either failed to scan or captured minimal feature information. The PF method used the minimum-distance principle to correctly extract the maximum contour information of the obstacle from the perspective of the LiDAR and camera detection after removing the invalid point cloud.…”
Section: SLAM Map Construction Based on PF Methods
confidence: 99%
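The minimum-distance contour extraction described in the statement above could be sketched as follows. This is a hedged illustration only, not the cited paper's implementation: the function name, the angular-binning scheme, and the `max_range` validity threshold are all assumptions introduced here.

```python
import math

def extract_contour(points, num_bins=36, max_range=5.0):
    """Approximate the obstacle contour facing the sensor: after discarding
    invalid returns (zero range or beyond max_range), keep only the closest
    point in each angular bin (minimum-distance principle)."""
    contour = {}
    for x, y in points:
        r = math.hypot(x, y)
        if r == 0.0 or r > max_range:  # drop invalid / out-of-range returns
            continue
        # Map the point's bearing to one of num_bins angular sectors.
        b = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * num_bins) % num_bins
        if b not in contour or r < contour[b][0]:
            contour[b] = (r, (x, y))  # keep the nearest point per sector
    # Return contour points ordered by angular sector.
    return [p for _, (_, p) in sorted(contour.items())]
```

For example, two points in the same sector at ranges 1 m and 2 m collapse to the nearer one, so occluded or interior returns do not distort the extracted contour.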
“…Barbosa [8] fused 2D LiDAR and camera data using a convolutional neural network for segmentation tasks, and improved the obstacle detection accuracy for driving vehicles under adverse weather and strong lighting conditions. Wang [9] proposed an obstacle detection method based on machine learning and improved VIDAR.…”
Section: Introduction
confidence: 99%
“…Another researcher enhanced radar features through temporal accumulation and fed them to a spatiotemporal encoder for radar feature extraction, while obtaining multi-scale 2D image features adapted to various spatial scales through the image backbone and neck model. The designed initial map transformer was then used to convert the image features into BEV [3].…”
Section: Related Work
confidence: 99%
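The image-to-BEV conversion mentioned in the statement above is, in the cited work, a learned transformer module; a purely geometric version of the same idea can be sketched as below. The intrinsic matrix `K`, the grid resolution, and the use of per-pixel depth (e.g. radar-derived) to lift pixels into 3-D are assumptions for illustration.

```python
import numpy as np

def image_to_bev(uv, depth, K, grid_res=0.5, grid_size=64):
    """Lift pixel coordinates with known depth into 3-D camera space,
    then scatter them onto a top-down BEV occupancy grid.
    uv: (N, 2) pixel coords; depth: (N,) metres; K: 3x3 intrinsics."""
    uv1 = np.concatenate([uv, np.ones((len(uv), 1))], axis=1)  # homogeneous pixels
    rays = np.linalg.inv(K) @ uv1.T                            # unit-depth back-projected rays
    pts = (rays * depth).T                                     # (N, 3): x right, y down, z forward
    bev = np.zeros((grid_size, grid_size))
    ix = (pts[:, 0] / grid_res + grid_size / 2).astype(int)    # lateral cell (ego centred)
    iz = (pts[:, 2] / grid_res).astype(int)                    # forward cell
    ok = (ix >= 0) & (ix < grid_size) & (iz >= 0) & (iz < grid_size)
    bev[iz[ok], ix[ok]] += 1                                   # occupancy count per cell
    return bev
```

A learned map transformer replaces the explicit depth and scatter step with attention between BEV queries and image features, but the geometric relationship it must recover is the one made explicit here.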
“…Contemporary multi-sensor fusion encompasses various forms, such as the fusion of LiDAR, cameras and inertial measurement units [1], the fusion of millimetre wave radar and cameras [2], [3], [4], and the fusion of automotive chassis…”
Section: Introduction
confidence: 99%
“…Barbosa and Osorio, in [21], study radar-based perception and radar-camera fusion, as applied to the navigation of an autonomous vehicle moving in unfavorable lighting and weather conditions.…”
Section: Introduction
confidence: 99%