2019
DOI: 10.1049/iet-ipr.2018.6696

Combination of modified U‐Net and domain adaptation for road detection

Abstract: Road detection is one of the crucial tasks for scene understanding in autonomous driving. Recently, methods based on deep learning have grown rapidly and addressed this task well, because they can extract richer features. In this study, the authors treat visual road detection as a classification of each pixel of the given image as road or non-road. Complex illumination is encountered in traffic applications, so the detection model has poor adaptability. They address…
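The abstract frames road detection as per-pixel binary classification (road vs. non-road). A minimal sketch of that framing — not the authors' actual network, just thresholding hypothetical per-pixel scores into a road mask:

```python
import numpy as np

def road_mask(pixel_scores: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn per-pixel road scores in [0, 1] into a binary road/non-road mask.

    In the paper the scores would come from a segmentation network
    (a modified U-Net); here we only illustrate the per-pixel
    classification framing.
    """
    return (pixel_scores >= threshold).astype(np.uint8)

# Toy 2x3 "image" of scores: 1 marks pixels classified as road.
scores = np.array([[0.9, 0.2, 0.7],
                   [0.4, 0.8, 0.1]])
print(road_mask(scores).tolist())  # [[1, 0, 1], [0, 1, 0]]
```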

Cited by 6 publications (5 citation statements) · References 45 publications (55 reference statements)
“…However, treating each pixel of the image as path or lane, Dong et al. considered the visual road-detection challenge by applying a U-Net-prior network with a DAM (Domain Adaptation Model) to reduce the disparity between the training images and the test images [80]. The proposed model was compared with other state-of-the-art methods such as RBNet [191], StixeNet II and MultiNet [192], whose max-F measures were 94.97%, 94.88% and 94.88% at 0.18 s, 1.2 s and 1.7 s, respectively. Their method achieved a 95.57% max F-measure in 0.15 s, faster and more accurate than the others, which indicates that monocular-vision-based systems can achieve high precision at a lower running time.…”
Section: Lane Detection and Tracking
confidence: 99%
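The max-F figures quoted above come from sweeping a decision threshold over per-pixel scores and taking the best F-measure. A hedged sketch of that computation (toy scores, labels, and threshold grid — not the benchmark's actual evaluation code):

```python
import numpy as np

def max_f_measure(scores: np.ndarray, labels: np.ndarray,
                  thresholds: np.ndarray) -> float:
    """Best F1 = 2PR/(P+R) over a sweep of decision thresholds (max-F)."""
    best = 0.0
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & labels)
        fp = np.sum(pred & ~labels)
        fn = np.sum(~pred & labels)
        if tp == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best

# Toy example: scores separate the positives perfectly, so max-F is 1.0.
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.1])
labels = np.array([True, True, True, False, False])
print(max_f_measure(scores, labels, np.linspace(0.05, 0.95, 19)))  # 1.0
```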
“…The total loss is obtained by the weighted summation of the above branch losses, as shown in formula (5):…”
Section: Loss Functions
confidence: 99%
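Formula (5) itself is not reproduced in the snippet; generically, a weighted summation of branch losses has the form L_total = Σ_i w_i · L_i. A minimal sketch with purely illustrative loss values and weights:

```python
def total_loss(branch_losses: list[float], weights: list[float]) -> float:
    """Weighted summation of per-branch losses (the generic form of such
    a total loss; the actual weights in the cited paper are not shown here)."""
    assert len(branch_losses) == len(weights)
    return sum(w * l for w, l in zip(weights, branch_losses))

# Hypothetical branch losses and weights (illustrative values only).
print(total_loss([0.5, 0.25, 0.25], [1.0, 2.0, 2.0]))  # 1.5
```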
“…With the progress and development of computer-vision technology [1,2], target detection [3], as one of its basic tasks, is becoming increasingly important in fields such as safety monitoring [4] and autonomous driving [5]. However, visible-light images require stable imaging conditions to ensure good performance and are strongly affected by lighting.…”
Section: Introduction
confidence: 99%
“…Currently, we focus on processing monocular camera images [17,18]. In this field, a basic algorithmic framework for road detection can be divided into feature extraction and classification.…”
Section: Road Detection
confidence: 99%
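The two-stage framework that statement describes (feature extraction followed by classification) can be sketched generically — the hand-crafted features and linear classifier below are illustrative assumptions, not the method of any cited paper:

```python
import numpy as np

def extract_features(patch: np.ndarray) -> np.ndarray:
    """Toy hand-crafted features for an image patch: mean and std of intensity."""
    return np.array([patch.mean(), patch.std()])

def classify(features: np.ndarray, w: np.ndarray, b: float) -> bool:
    """Toy linear classifier: True = road, False = non-road."""
    return float(features @ w + b) > 0.0

# A bright, uniform patch looks road-like under these toy assumptions:
# features = [0.8, 0.0], score = 0.8 - 0.0 - 0.5 = 0.3 > 0.
road_patch = np.full((4, 4), 0.8)
print(classify(extract_features(road_patch), w=np.array([1.0, -1.0]), b=-0.5))  # True
```

Deep-learning approaches like the one discussed above fold both stages into a single network, which is why they extract richer features than hand-crafted pipelines.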