2022
DOI: 10.20517/ir.2021.20
Deep learning for LiDAR-only and LiDAR-fusion 3D perception: a survey

Abstract: The perception system of robots and autonomous cars relies on the collaboration of multiple types of sensors to understand the surrounding environment. LiDAR has shown great potential to provide accurate environmental information, so deep learning on LiDAR point clouds is drawing increasing attention. However, LiDAR performance degrades in severe weather. Fusing LiDAR with other sensors is therefore an emerging topic, since the other modalities complement what a single LiDAR can capture. Challenges exist in de…

Cited by 10 publications (1 citation statement)
References 105 publications (238 reference statements)
“…The fusion of the two types of data can provide more contextualized information, but everything will depend on the processing capacity available and what is required to be analyzed. The authors in [12] present a comprehensive survey of deep learning on LiDAR-only and LiDAR-fusion 3D perception tasks. This paper gives a review according to four key tasks in the field of LiDAR-based perception: object classification, object detection, object tracking, and segmentation.…”
Section: State of the Art
Confidence: 99%
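The first task in that taxonomy, object classification on raw point clouds, hinges on one architectural idea common to many of the surveyed networks: a shared per-point MLP followed by a symmetric pooling operation, which makes the prediction invariant to point ordering. The sketch below (not from the paper; random weights, NumPy only, PointNet-style design) illustrates that invariance:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(points, w1, w2):
    # points: (N, 3); the same MLP is applied to every point independently
    h = np.maximum(points @ w1, 0.0)   # ReLU, shape (N, 64)
    return np.maximum(h @ w2, 0.0)     # shape (N, 128)

def classify(points, w1, w2, w_cls):
    feats = shared_mlp(points, w1, w2)
    global_feat = feats.max(axis=0)    # symmetric max-pool over points
    return global_feat @ w_cls         # class logits, shape (10,)

# Purely illustrative random weights (an untrained network)
w1 = rng.standard_normal((3, 64))
w2 = rng.standard_normal((64, 128))
w_cls = rng.standard_normal((128, 10))

cloud = rng.standard_normal((1024, 3))          # toy "LiDAR" point cloud
shuffled = cloud[rng.permutation(len(cloud))]   # same points, new order

# Max-pooling makes the logits identical under any point permutation
assert np.allclose(classify(cloud, w1, w2, w_cls),
                   classify(shuffled, w1, w2, w_cls))
```

Because the max over points ignores their order, the network treats a point cloud as a set rather than a sequence, which is why this design recurs across the classification, detection, and segmentation methods the survey covers.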