Real-Time Vehicle Detection Framework Based on the Fusion of LiDAR and Camera (2020)
DOI: 10.3390/electronics9030451

Abstract: Vehicle detection is essential for driverless systems. However, the current single-sensor detection mode is no longer sufficient in complex and changing traffic environments. Therefore, this paper combines a camera and light detection and ranging (LiDAR) to build a vehicle-detection framework characterized by multi-adaptability, high real-time performance, and robustness. First, a multi-adaptive high-precision depth-completion method was proposed to convert the 2D LiDAR sparse depth map into a dense …
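The abstract describes densifying the sparse LiDAR depth map before fusion, but the paper's own depth-completion method is not reproduced here. The snippet below is a minimal sketch of a common classical baseline (invert, morphologically dilate, smooth), given only to illustrate the general idea of sparse-to-dense completion; the 100 m range cap and kernel size are assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's method): densify a sparse LiDAR depth map
# with morphological dilation, a classical baseline for depth completion.
import numpy as np
import cv2

def densify_sparse_depth(sparse_depth: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    """sparse_depth: HxW float32 array, 0 where no LiDAR return was projected."""
    max_depth = 100.0  # assumed sensor range in metres
    # Invert valid depths so dilation (a max filter) keeps the nearest return.
    inverted = np.where(sparse_depth > 0, max_depth - sparse_depth, 0).astype(np.float32)

    # Spread sparse measurements into neighbouring empty pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    dilated = cv2.dilate(inverted, kernel)

    # Light smoothing, then undo the inversion where values now exist.
    blurred = cv2.GaussianBlur(dilated, (kernel_size, kernel_size), 0)
    dense = np.where(blurred > 0, max_depth - blurred, 0)
    return dense.astype(np.float32)
```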

Cited by 38 publications (18 citation statements)
References 35 publications (27 reference statements)
“…We compared the results of facet detection IoU from the adapted algorithm [20] with our proposed approach on the following classes: car, cyclist, misc, pedestrian, truck, and van. For the person sitting class, the clustering algorithm takes other objects into account, e.g., a table, so the evaluation is not correct.…”
Section: Evaluation and Results
Confidence: 99%
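The comparison quoted above is based on intersection-over-union (IoU) between detected and ground-truth regions. For reference, a minimal axis-aligned 2D IoU computation (the standard metric, not code from either paper) looks like this:

```python
def iou_2d(box_a, box_b):
    """Axis-aligned 2D IoU. Boxes are (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection rectangle (zero area if the boxes do not overlap).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: two partially overlapping boxes.
print(iou_2d((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```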
“…The authors of [20] propose a real-time framework for object detection that combines camera and LiDAR sensors. The point cloud from LiDAR is converted into a dense depth map, which is aligned to the camera image.…”
Section: Related Work
Confidence: 99%
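Aligning a LiDAR point cloud with the camera image amounts to projecting each 3D point through the camera's extrinsic and intrinsic matrices. The sketch below shows that standard projection step; the calibration inputs `T_cam_lidar` and `K` are assumed to come from a KITTI-style calibration file and are not specific to this paper.

```python
import numpy as np

def project_lidar_to_depth_map(points_lidar, T_cam_lidar, K, image_shape):
    """Project Nx3 LiDAR points into a sparse depth map of size image_shape (H, W).

    T_cam_lidar: 4x4 extrinsic transform (LiDAR frame -> camera frame).
    K:           3x3 camera intrinsic matrix.
    """
    h, w = image_shape
    depth = np.zeros((h, w), dtype=np.float32)

    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Perspective projection into pixel coordinates.
    uvz = (K @ pts_cam.T).T
    u = (uvz[:, 0] / uvz[:, 2]).astype(int)
    v = (uvz[:, 1] / uvz[:, 2]).astype(int)
    z = pts_cam[:, 2]

    # Write depth for pixels inside the image; keep the nearest return when
    # several LiDAR points land on the same pixel.
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], z[inside]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth
```

The resulting sparse depth map is what a depth-completion step, such as the one described in the abstract, would then densify.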
“…The detections from the independent networks are then fused using statistical methods or a smaller network. The merit of late fusion approaches is that, in case of the failure of one sensor, the detections from the other sensor can still be used, albeit with a reduction in accuracy [13]. Nevertheless, it can sometimes be difficult to perform a certain task efficiently in a redundant way due to the limitations of the sensor modalities, for example, 3D object detection using a monocular camera.…”
Section: B. Late Fusion Methods
Confidence: 99%
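As a concrete illustration of the late-fusion fallback described above, the sketch below merges per-sensor 2D detections by confidence-weighted box averaging when the two streams agree, and simply passes through the surviving stream when one sensor fails. The detection dictionaries and thresholds are hypothetical, not taken from either paper, and the helper reuses `iou_2d` from the earlier sketch.

```python
def fuse_detections(cam_dets, lidar_dets, iou_thresh=0.5):
    """Late fusion of 2D detections from two sensors.

    Each detection is a dict: {"box": (x1, y1, x2, y2), "score": float}.
    If one list is empty (sensor failure), the other is returned unchanged.
    """
    if not cam_dets:
        return list(lidar_dets)
    if not lidar_dets:
        return list(cam_dets)

    fused, matched_lidar = [], set()
    for cam in cam_dets:
        best_j, best_iou = None, iou_thresh
        for j, lid in enumerate(lidar_dets):
            if j in matched_lidar:
                continue
            overlap = iou_2d(cam["box"], lid["box"])  # iou_2d defined earlier
            if overlap > best_iou:
                best_j, best_iou = j, overlap
        if best_j is None:
            fused.append(cam)  # camera-only detection
        else:
            lid = lidar_dets[best_j]
            matched_lidar.add(best_j)
            # Confidence-weighted box average; keep the stronger score.
            w_c, w_l = cam["score"], lid["score"]
            box = tuple((w_c * c + w_l * l) / (w_c + w_l)
                        for c, l in zip(cam["box"], lid["box"]))
            fused.append({"box": box, "score": max(w_c, w_l)})
    # Unmatched LiDAR-only detections are kept as well.
    fused += [d for j, d in enumerate(lidar_dets) if j not in matched_lidar]
    return fused
```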
“…To obtain more data for improved object recognition, autonomous vehicles attempt to apply high-resolution cameras or multi-channel LiDAR sensors. In addition, they attempt to combine several cameras or multi-channel LiDAR sensors [2,3]. Thus, massive amounts of data are generated and transmitted from the vehicle devices.…”
Section: Background and Related Work
Confidence: 99%
“…In particular, massive data is generated by cameras and light detection and ranging (LiDAR) sensors for autonomous driving and by multimedia for infotainment services. Therefore, a high-speed IVN backbone is needed for this kind of data transmission [1,2,3].…”
Section: Introduction
Confidence: 99%