2020
DOI: 10.3390/rs12203274

Deep Dual-Modal Traffic Objects Instance Segmentation Method Using Camera and LIDAR Data for Autonomous Driving

Abstract: Recent advancements in environmental perception for autonomous vehicles have been driven by deep learning-based approaches. However, effective traffic target detection in complex environments remains a challenging task. This paper presents a novel dual-modal instance segmentation deep neural network (DM-ISDNN) by merging camera and LIDAR data, which can be used to deal with the problem of target detection in complex environments efficiently based on multi-sensor data fusion. Due to the sparseness of the LIDAR …

Cited by 19 publications (6 citation statements); references 29 publications (29 reference statements).
“…However, the lack of variety in terms of weather and light conditions limits its potential. Therefore, the Dual-Modal Dataset [22], which includes paired LiDAR and RGB image data from the KITTI dataset, was released for traffic-object instance segmentation.…”
Section: Public Semantic Dataset
confidence: 99%
“…However, also for (semi-)autonomous systems, visual information can be considered for GPS-independent navigation and positioning purposes. While there are quite a few (annotated) data sets and annotation tools suitable for, among others, machine learning approaches in the area of autonomous vehicles on land ([63, 64], for example), there are relatively few starting points for image classification approaches in maritime environments. To mitigate that, we investigated the possibility of creating a semi-automated tool for image annotation based on image sets gathered in the previously described trial and data collection runs in WARA-PS [65].…”
Section: Collaborative Autonomous Ships/Marine Vessels (USV) Experimentation
confidence: 99%
“…In this condition, a large clustering radius is preferred to avoid over-segmentation. To show this problem more intuitively, the traditional DSC was tested on the KITTI dataset [34], with the clustering radius and the minimum number of points (denoted by MinPts) set to 0.15 m and 5, respectively. Some typical wrong segmentations are shown in Figure 4.…”
Section: Problem Analysis
confidence: 99%
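The quoted test pairs a clustering radius of 0.15 m with MinPts = 5. The citing paper's DSC pipeline is not reproduced here; as a stand-in, the same density-based clustering step can be sketched with scikit-learn's DBSCAN, where `eps` plays the role of the clustering radius and `min_samples` the role of MinPts. The toy point sets below are illustrative, not from the paper.

```python
# Hedged sketch: density-based clustering of LiDAR points with the quoted
# parameters (radius 0.15 m, MinPts 5), using DBSCAN as a stand-in for the
# paper's "traditional DSC". All point data below is synthetic.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_points(points, radius=0.15, min_pts=5):
    """Cluster an (N, 3) point array; returns per-point labels (-1 = noise)."""
    return DBSCAN(eps=radius, min_samples=min_pts).fit(points).labels_

# Toy scene: two tight 8-point groups (each well inside the 0.15 m radius)
# and one isolated stray return far from both.
rng = np.random.default_rng(0)
obj_a = rng.uniform(-0.03, 0.03, (8, 3))          # object near the origin
obj_b = rng.uniform(-0.03, 0.03, (8, 3)) + 5.0    # object 5 m away
noise = np.array([[10.0, 10.0, 0.0]])             # stray return
labels = cluster_points(np.vstack([obj_a, obj_b, noise]))
print(sorted({int(l) for l in labels}))  # → [-1, 0, 1]: two objects plus noise
```

A small radius, as the quote notes, would instead split a single object into several clusters (over-segmentation), which is why the citing authors argue for adaptive parameters.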
“…Specifically, the inherent attributes of the point cloud, h and v, are calculated from the position of the sample point; the mesh width in the grid map, w, has been given in the pre-processing stage; the upper boundary of the semi-major axis, βL, depends on the maximum length of the detected object according to the design concept from Section 3.2; and the linear coefficients α and β need to be studied. Thus, the theoretical relation between MinPts, α, and β is studied, and numerical optimization of the parameters is analyzed on the KITTI dataset [34] by considering the comprehensive performance.…”
Section: Parameter Design
confidence: 99%