2022
DOI: 10.3390/s22072453
Unifying Obstacle Detection, Recognition, and Fusion Based on the Polarization Color Stereo Camera and LiDAR for the ADAS

Abstract: The perception module plays an important role in vehicles equipped with advanced driver-assistance systems (ADAS). This paper presents a multi-sensor data fusion system based on a polarization color stereo camera and a forward-looking light detection and ranging (LiDAR) sensor, which achieves multi-target detection, recognition, and data fusion. The You Only Look Once v4 (YOLOv4) network is used for object detection and recognition on the color images. The depth images are obtained from the rect…
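The abstract names the two stages (YOLOv4 detections on the color image, depth from the stereo/LiDAR side) but this excerpt does not detail how they are fused. Below is a minimal sketch of one common way to combine them, assuming a calibrated setup: project LiDAR points into the camera image with intrinsics K and extrinsics (R, t), then take the median depth of the points falling inside each 2D detection box. K, R, t, the synthetic point cloud, and the example box are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch (not the paper's code): associate LiDAR depth with 2D detections
# by projecting points into the camera image. K, R, t, and the example box are
# placeholder values for illustration only.
import numpy as np

def project_lidar_to_image(points_xyz, K, R, t):
    """Project Nx3 LiDAR points (LiDAR frame) into pixel coordinates.

    Returns (uv, depth) for points lying in front of the camera.
    """
    cam = points_xyz @ R.T + t          # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0.1          # keep points ahead of the camera
    cam = cam[in_front]
    uv_h = cam @ K.T                    # pinhole projection (homogeneous)
    uv = uv_h[:, :2] / uv_h[:, 2:3]     # normalize by depth
    return uv, cam[:, 2]

def box_depth(uv, depth, box):
    """Median LiDAR depth of projected points inside an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
             (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return float(np.median(depth[inside])) if inside.any() else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic cloud: z is the forward axis here because R is identity.
    points = rng.uniform([-10, -2, 2], [10, 2, 40], size=(5000, 3))
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics
    R, t = np.eye(3), np.zeros(3)                                  # assumed extrinsics
    uv, depth = project_lidar_to_image(points, K, R, t)
    # A detection box in pixel coordinates, as a YOLOv4-style detector might output.
    print("estimated depth:", box_depth(uv, depth, (600, 300, 700, 420)))
```

This box-level association is only one plausible fusion strategy; the paper's actual pipeline may weight points, filter by polarization cues, or fuse at a different stage.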

Cited by 11 publications (2 citation statements)
References: 37 publications
“…Deep learning-related technologies are increasingly integrated into people’s daily life, and object detection algorithms (Qi et al, 2021; Liu et al, 2022a, b; Xu et al, 2022), as a crucial component of the autonomous driving perception layer, can create a solid foundation for behavioral judgments during autonomous driving. Although object detection algorithms based on 2D images (Bochkovskiy et al, 2020; Bai et al, 2022; Cheon et al, 2022; Gromada et al, 2022; Long et al, 2022; Otgonbold et al, 2022; Wahab et al, 2022; Wang et al, 2022) have had a lot of success at this stage, single-view images cannot completely reflect the position, pose, and motion orientation of objects in 3D space due to the lack of depth information in 2D images. Consequently, in the field of autonomous driving, the focus of object detection research has increasingly switched from 2D image detection to 3D image detection and point cloud detection.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Due to an integrated IMU (Inertial Measurement Unit), it is possible to perform online lidar odometry [5] without any additional engineering effort. Thus, recent advances in mobile mapping algorithms show great improvement in lidar 3D mapping, even from the perspective of other applications such as ADAS (Advanced Driver Assistance Systems) [6] (related to future autonomous driving).…”
Section: Introduction (mentioning)
Confidence: 99%