2017 13th IEEE International Conference on Intelligent Computer Communication and Processing (ICCP)
DOI: 10.1109/iccp.2017.8117023
Real-time object detection using a sparse 4-layer LIDAR

Cited by 15 publications (15 citation statements)
References 11 publications
“…To tackle the sensor-fusion computational problem, the work in [22] proposed an early-fusion method that fuses camera and LiDAR with only one backbone, attaining a good balance between accuracy and efficiency. Other methods, such as [23], solve the problem of data correction and temporal point cloud fusion for object detection using only a 4-layer LiDAR. However, the performance of these state-of-the-art approaches depends on large-scale training datasets and ground-truth labels.…”
Section: D. Object Detection (mentioning)
Confidence: 99%
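The temporal point cloud fusion mentioned in this excerpt can be illustrated with a minimal sketch: older sparse scans are re-expressed in the current sensor frame using the known ego-motion and then concatenated, which densifies a 4-layer cloud over time. The pose convention, function names, and the numeric example below are illustrative assumptions, not the actual method of [23].

```python
# Minimal sketch (not the authors' implementation): temporal fusion of sparse
# LiDAR scans. Older scans are transformed into the current ego frame using the
# known ego-motion, then concatenated to densify a sparse (e.g. 4-layer) cloud.
import numpy as np

def make_transform(yaw: float, tx: float, ty: float) -> np.ndarray:
    """Planar ego-motion (yaw + translation) as a homogeneous 4x4 matrix."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[0, 3], T[1, 3] = tx, ty
    return T

def fuse_scans(scans, poses) -> np.ndarray:
    """Fuse scans (each Mx3, in its own sensor frame) into the latest frame.

    poses[i] is the 4x4 world pose of the sensor at scan i; the last entry is
    the reference (current) pose to which all older scans are aligned.
    """
    T_ref_inv = np.linalg.inv(poses[-1])
    fused = []
    for pts, T_world in zip(scans, poses):
        # Map scan frame -> world -> current sensor frame.
        T_rel = T_ref_inv @ T_world
        hom = np.hstack([pts, np.ones((pts.shape[0], 1))])
        fused.append((hom @ T_rel.T)[:, :3])
    return np.vstack(fused)

# Toy example: two consecutive sparse scans, the ego vehicle moved 1 m forward.
scan_prev = np.array([[5.0, 0.0, 0.2], [6.0, 1.0, 0.2]])
scan_curr = np.array([[4.0, 0.0, 0.2]])
pose_prev = make_transform(0.0, 0.0, 0.0)
pose_curr = make_transform(0.0, 1.0, 0.0)
dense = fuse_scans([scan_prev, scan_curr], [pose_prev, pose_curr])
print(dense)  # previous points appear ~1 m closer in the current frame
```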
“…In the presented pipeline, the modules Object Spatial and Temporal Alignment and LIDAR Motion Correction deal with the spatiotemporal alignment of raw sensor data to a reference timestamp, given by the front fisheye camera. This alignment is performed at object level for the long-range RADAR and trifocal camera objects, as described in Section 3.2, and at point cloud level in the LIDAR Motion Correction module [46]. The motion-corrected point clouds are projected, using the Points Projection module, onto the intensity image and onto a semantic segmentation image obtained as described in [47] and given by the Semantic Segmentation module, yielding an enhanced point cloud in which each 3D point carries semantic as well as color information.…”
Section: General System Overview (mentioning)
Confidence: 99%
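As a rough illustration of the point projection step described in this excerpt, the sketch below projects camera-frame 3D points through pinhole intrinsics and attaches the color and semantic class found at each projected pixel to the point. The intrinsics, array layouts, and function names are assumptions for the example, not taken from the cited pipeline.

```python
# Illustrative sketch only: project motion-corrected 3D points into an image
# and enrich each point with the per-pixel color and semantic class.
import numpy as np

def project_points(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points with intrinsics K; returns Nx2 pixels."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def enrich_point_cloud(points_cam, K, rgb_image, semantic_image):
    """Return (x, y, z, r, g, b, class_id) rows for points landing inside the image."""
    h, w = semantic_image.shape[:2]
    px = np.round(project_points(points_cam, K)).astype(int)
    in_front = points_cam[:, 2] > 0          # keep only points ahead of the camera
    in_img = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    keep = in_front & in_img
    u, v = px[keep, 0], px[keep, 1]
    color = rgb_image[v, u]                  # per-point RGB from the intensity/color image
    label = semantic_image[v, u][:, None]    # per-point class id from the segmentation image
    return np.hstack([points_cam[keep], color, label])

# Toy example with assumed intrinsics and a 4x4 image.
K = np.array([[2.0, 0.0, 2.0], [0.0, 2.0, 2.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 4.0], [1.0, 0.5, 2.0]])
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
sem = np.ones((4, 4), dtype=np.uint8)        # everything labeled class 1
print(enrich_point_cloud(pts, K, rgb, sem))
```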
“…It is important to achieve robust and accurate scene understanding for video surveillance and autonomous driving [1]. Camera-based [2,3], LiDAR-based [4], radar-based [5] and sensor fusion-based [6] methods are widely used for scene perception and for recognizing semantic information about moving targets. Although a visual camera cannot obtain the depth of targets and the environment, it is unburdened by high equipment cost or a complicated sensor-fusion strategy.…”
Section: Introduction (mentioning)
Confidence: 99%