2021
DOI: 10.3390/electronics10243113
Lane following Learning Based on Semantic Segmentation with Chroma Key and Image Superposition

Abstract: There are various techniques to approach learning in autonomous driving; however, all of them suffer from some problems. In the case of imitation learning based on artificial neural networks, the system must learn to correctly identify the elements of the environment. In some cases, it takes a lot of effort to tag the images with the proper semantics. This is also relevant given the need to have very varied scenarios to train and to thus obtain an acceptable generalization capacity. In the present work, we pro…
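The abstract describes generating varied training scenes via chroma key and image superposition. As a rough illustration of the general idea (not the paper's actual pipeline), the sketch below composites a foreground captured against a key color onto an arbitrary background with NumPy; the key color and distance threshold are assumptions for the example.

```python
import numpy as np

def chroma_key_composite(foreground, background, key_color=(0, 255, 0), tol=60):
    """Replace pixels close to key_color in `foreground` with `background`.

    Toy chroma-key superposition: the L1 color-distance threshold here is
    an illustrative assumption, not the method from the paper.
    """
    fg = foreground.astype(np.int16)
    key = np.array(key_color, dtype=np.int16)
    dist = np.abs(fg - key).sum(axis=-1)   # per-pixel L1 distance to key color
    mask = dist < tol                       # True where the pixel is "green screen"
    out = foreground.copy()
    out[mask] = background[mask]            # superimpose background there
    return out, mask

# Tiny 1x2 image: left pixel is pure green (keyed out), right pixel is red.
fg = np.array([[[0, 255, 0], [255, 0, 0]]], dtype=np.uint8)
bg = np.array([[[10, 20, 30], [40, 50, 60]]], dtype=np.uint8)
out, mask = chroma_key_composite(fg, bg)
```

The resulting `mask` can also serve directly as a pixel-accurate semantic label for the superimposed object, which is the appeal of this kind of synthetic data generation for segmentation training.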

Cited by 2 publications (1 citation statement)
References 35 publications (55 reference statements)
“…Corrochano et al. proposed a method for automatic semantic labeling and tested it using a small-scale car model that learns to drive on a reduced circuit [11]. Zhang et al. address classification and localization issues in lane detection using semantic segmentation, proposing a Global Convolutional Network (GCN) model that achieved a mean square error of 57.5875 [12].…”
Section: International Journal On Recent and Innovation Trends In Com...mentioning
confidence: 99%