2020 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/iv47402.2020.9304613
Lane Detection in Low-light Conditions Using an Efficient Data Enhancement: Light Conditions Style Transfer

Cited by 68 publications (29 citation statements)
References: 17 publications
“…To verify the effects of our model, we undertook a broad comparison with several state-of-the-art methods. We evaluated Nb_SINet and multiple backbones, i.e., ENet_LGAD [15], SIM_CycleGAN+ERFNet [16], ERFNet-E2E [17], ERFNet_VP [18] and ERFNet-HESA [19], for each scenario, and the mean F1 for each method is also shown in Table 5.…”
Section: Methods
confidence: 99%
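For context on how the per-scenario and mean F1 figures quoted above are typically reported, here is a minimal sketch; the scenario names and detection counts are illustrative placeholders, not values from the cited works.

```python
# Minimal sketch: per-scenario F1 from true-positive, false-positive and
# false-negative lane-marker counts, plus the mean F1 across scenarios.
# The scenario names and counts below are illustrative placeholders only.

def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical per-scenario detection counts: (tp, fp, fn)
scenarios = {
    "normal":  (9000, 600, 500),
    "crowded": (7000, 1500, 1800),
    "night":   (5000, 1200, 1400),
}

per_scenario_f1 = {name: f1_score(*counts) for name, counts in scenarios.items()}
mean_f1 = sum(per_scenario_f1.values()) / len(per_scenario_f1)

for name, score in per_scenario_f1.items():
    print(f"{name}: F1 = {score:.3f}")
print(f"mean F1 = {mean_f1:.3f}")
```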
“…However, substantial computational resources are required to train the teacher network. Liu et al. [16] presented a style-transfer data augmentation that uses generative adversarial networks to generate low-light images, improving the environmental adaptability of the lane detector without demanding any additional manual annotation or inference overhead. Yun et al. [17] used a horizontal reduction module to compactly extract lane marker information from the image and achieved end-to-end lane marker detection via row-wise classification.…”
Section: Related Work
confidence: 99%
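The data-enhancement idea summarized in this excerpt (and in the title of the paper under review) is to translate normally-lit training images into a low-light style with a GAN generator so that the existing lane labels can be reused without re-annotation. Below is a minimal, hedged sketch of that augmentation step, assuming a pretrained CycleGAN-style day-to-night generator exported as a TorchScript module; the checkpoint name, file paths, and function are hypothetical, not the authors' code.

```python
# Sketch of GAN-based light-condition augmentation, assuming a pretrained
# CycleGAN-style day->night generator saved as TorchScript.
# "G_day2night.pt" and the image paths are hypothetical placeholders.
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Resize((256, 512)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # CycleGAN-style generators expect inputs in [-1, 1]
])
to_image = transforms.ToPILImage()

generator = torch.jit.load("G_day2night.pt").eval()  # hypothetical pretrained checkpoint

@torch.no_grad()
def to_low_light(path: str) -> Image.Image:
    """Translate a normally-lit road image into a low-light-style image."""
    x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
    fake_night = generator(x).squeeze(0)              # generator output in [-1, 1]
    fake_night = (fake_night * 0.5 + 0.5).clamp(0, 1)  # back to [0, 1] for saving
    return to_image(fake_night)

# Usage: augment the training set with synthetic low-light copies.
to_low_light("train/00001.jpg").save("train_lowlight/00001.jpg")
```

Because the translation only changes appearance (lighting style) and not scene geometry, the lane annotations of the source image remain valid for the translated copy, which is what makes this augmentation annotation-free.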
“…In crowded, dazzle, no line, cross, and night scenarios, FANet outperforms the other methods. Compared with SIM-CycleGAN [39], which is specifically designed for different scenarios, FANet is close to it on many metrics, and even better on some. Compared with the knowledge distillation method IntRA-KD [14] and the network search method CurveLanes-NAS [40], FANet achieves an F1 score that is higher by 3.09% and 4.09%, respectively.…”
Section: Methods
confidence: 99%
“…FIGURE 2: Preprocessing a marked road. In addition to the use of smoothing filters, image enhancement has been done in some research in order to retain contour details [28]–[30].…”
Section: Image Smoothing, Sharpening, and Shadow Removal
confidence: 99%
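As a concrete illustration of the preprocessing described in that excerpt, a common recipe is to smooth the image to suppress noise and then apply unsharp masking so lane contours are retained. This is a generic OpenCV sketch under that assumption, not the cited authors' exact pipeline; the input filename is a placeholder.

```python
# Generic smoothing + sharpening (unsharp masking) sketch with OpenCV.
# Illustrates the kind of preprocessing described above; not the cited pipeline.
import cv2
import numpy as np

def smooth_and_sharpen(bgr: np.ndarray) -> np.ndarray:
    # Edge-preserving smoothing to suppress noise on the road surface.
    smoothed = cv2.bilateralFilter(bgr, d=9, sigmaColor=75, sigmaSpace=75)
    # Unsharp masking: subtract a blurred copy so lane contours stay crisp.
    blurred = cv2.GaussianBlur(smoothed, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(smoothed, 1.5, blurred, -0.5, 0)
    return sharpened

image = cv2.imread("road.jpg")  # hypothetical input frame
cv2.imwrite("road_preprocessed.jpg", smooth_and_sharpen(image))
```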