2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489363

Mapping Road Lanes Using Laser Remission and Deep Neural Networks

Abstract: We propose the use of deep neural networks (DNN) for solving the problem of inferring the position and relevant properties of lanes of urban roads with poor or absent horizontal signalization, in order to allow the operation of autonomous cars in such situations. We take a segmentation approach to the problem and use the Efficient Neural Network (ENet) DNN for segmenting LiDAR remission grid maps into road maps. We represent road maps using what we call road grid maps. Road grid maps are square matrices and …
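The abstract describes feeding a LiDAR remission grid map to ENet and reading out a per-cell lane code. The excerpt does not include the authors' implementation, so the following is only a minimal PyTorch sketch of that general setup: a toy encoder-decoder (far smaller than ENet) that maps a single-channel remission map to per-cell lane-code logits. The class count (17), layer sizes, and input resolution are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' implementation): a toy encoder-decoder
# in the spirit of ENet that maps a single-channel LiDAR remission grid
# map to per-cell lane-code logits.
import torch
import torch.nn as nn

NUM_CLASSES = 17  # assumed: code 0 = off-road, codes 1-16 = lane cells


class TinyRoadSegNet(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # downsample to 1/2
            nn.BatchNorm2d(16), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # downsample to 1/4
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2),     # upsample x2
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),  # back to input size
        )

    def forward(self, remission: torch.Tensor) -> torch.Tensor:
        # remission: (batch, 1, H, W) grid of normalized LiDAR remission values
        return self.decoder(self.encoder(remission))


if __name__ == "__main__":
    net = TinyRoadSegNet()
    dummy = torch.rand(1, 1, 256, 256)    # one 256x256 remission map
    logits = net(dummy)                   # (1, 17, 256, 256)
    road_grid_map = logits.argmax(dim=1)  # per-cell lane code
    print(road_grid_map.shape)            # torch.Size([1, 256, 256])
```

In the paper's segmentation framing, training such a network would amount to minimizing a per-cell cross-entropy loss between predicted codes and annotated road grid maps.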

Cited by 7 publications (5 citation statements) · References 35 publications
“…Examples of the above can be found in [22], in which a U-Net is compared with a fully convolutional network (FCN) for road segmentation in 2D aerial images, highlighting each network's metrics at different image sizes. Similarly, the Efficient Neural Network (ENet) can be used to infer the position and properties of highway lane lines by segmenting roads in 2D aerial-map images and generating a grid map that identifies the lateral and central lines of the road, with the goal of supporting autonomous vehicle management with this information [23]. Another example is found in [24], where researchers used 3D scenes acquired with radar to generate 2D top-view maps, which they fed as input to FCN, SegNet, and U-Net networks to segment streets, cars, edges, and fences in 2D top-view form.…”
Section: Related Work, 2.1 Semantic Segmentation Top-View Work Related … (mentioning)
confidence: 99%
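The statement above mentions turning 3D scenes into 2D top-view maps before segmentation. As a hedged illustration of that rasterization step (not taken from any of the cited papers), the NumPy sketch below projects an (N, 3) point cloud onto a fixed-resolution grid; the grid extent, the 0.2 m cell size, and the choice of max-height as the per-cell statistic are all assumptions.

```python
# Hedged sketch: rasterize a 3D point cloud into a 2D top-view map
# suitable as input to a segmentation network (ENet / SegNet / U-Net).
import numpy as np


def points_to_top_view(points: np.ndarray,
                       x_range=(-40.0, 40.0),
                       y_range=(-40.0, 40.0),
                       cell_size=0.2) -> np.ndarray:
    """points: (N, 3) array of x, y, z; returns an (H, W) max-height map."""
    w = int((x_range[1] - x_range[0]) / cell_size)
    h = int((y_range[1] - y_range[0]) / cell_size)
    grid = np.full((h, w), -np.inf)

    cols = ((points[:, 0] - x_range[0]) / cell_size).astype(int)
    rows = ((points[:, 1] - y_range[0]) / cell_size).astype(int)
    keep = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)

    # Keep the highest z seen in each cell (np.maximum.at handles
    # multiple points falling into the same cell).
    np.maximum.at(grid, (rows[keep], cols[keep]), points[keep, 2])
    grid[np.isinf(grid)] = 0.0  # empty cells -> 0
    return grid


# Example: 10,000 random points in an 80 m x 80 m square.
cloud = np.random.uniform([-40, -40, 0], [40, 40, 3], size=(10_000, 3))
top_view = points_to_top_view(cloud)  # (400, 400) image for the network
```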
“…In the state-of-the-art databases for autonomous driving that have scenes generated from point clouds [22][23][24][25], the data therein are three-dimensional. Therefore, the detection labels have three location values.…”
Section: Introduction (mentioning)
confidence: 99%
“…The RGB-D sequences used two different RGB-D sensing cameras: Microsoft Kinect (sequence1-Kinect) and Orbbec Astra (sequence2-Astra and sequence3-Astra). Further details on time duration, data statistics, and information parsers are given on the project page.…”
Section: Dataset and Object Training Samples (mentioning)
confidence: 99%
“…In this context, embedding a higher level of scene understanding to identify particular objects of interest (including people), as well as to localize them, would greatly help intelligent agents perform effective visual navigation, perception, and manipulation tasks. Notably, this is a desired capability in human-robot interaction and in autonomous robot navigation in daily-life scenes, since it can provide “situation awareness” by distinguishing dynamic entities (e.g., humans, vehicles) from static ones (e.g., door, bench) [2,3], or by recognizing unsafe situations. This competence is also instrumental in the development of personal assistant robots, which must deal with different objects of interest when guiding visually impaired people to pass through a door, find a bench, or locate a water fountain.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, a grid representation might require a wasteful use of memory space and processing time, since most of the environment where self-driving cars operate is composed not of roads but of buildings, free space, etc. Carneiro et al. (2018) proposed a metric road map (Figure 5), a grid map in which each 0.2 m × 0.2 m cell contains a code that, when nonzero, indicates that the cell belongs to a lane. Codes ranging from 1 to 16 represent either the relative distance from a cell to the center of the lane or the type of lane marking (broken, solid, or none) present on the road.…”
Section: Metric Representations (mentioning)
confidence: 99%
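The statement above describes the road grid map as a per-cell code in 1–16 encoding either distance to the lane center or the lane-marking type. The excerpt does not give the exact code assignment, so the split used in the sketch below (codes 1–13 as distance bins, 14–16 as broken/solid/none markings) is a purely hypothetical mapping, shown only to illustrate how such a decoder might look.

```python
# Illustrative decoder for road grid map cell codes. The code-to-meaning
# mapping below is an ASSUMPTION, not the published assignment.
import numpy as np

CELL_SIZE_M = 0.2  # each cell covers 0.2 m x 0.2 m

DISTANCE_CODES = range(1, 14)                             # assumed: offset bins
MARKING_CODES = {14: "broken", 15: "solid", 16: "none"}   # assumed: marking types


def describe_cell(code: int) -> str:
    if code == 0:
        return "not a lane cell"
    if code in DISTANCE_CODES:
        # assumed: code counts cells of lateral offset from the lane center
        return f"lane cell, ~{code * CELL_SIZE_M:.1f} m from lane center"
    if code in MARKING_CODES:
        return f"lane marking: {MARKING_CODES[code]}"
    raise ValueError(f"unknown cell code: {code}")


road_grid_map = np.array([[0, 16, 3],
                          [0, 15, 1]])
for code in road_grid_map.flat:
    print(int(code), "->", describe_cell(int(code)))
```

Under this encoding, a planner can recover both lane membership and lane geometry from a single integer grid, which is the compactness-versus-coverage trade-off the quoted passage is weighing.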