2020
DOI: 10.48550/arxiv.2002.06604
Preprint

Key Points Estimation and Point Instance Segmentation Approach for Lane Detection

Abstract: State-of-the-art lane detection methods achieve strong performance. Despite their advantages, these methods have critical deficiencies, such as a limited number of detectable lanes and high false-positive rates. In particular, false positives can cause wrong and dangerous vehicle control. In this paper, we propose a novel deep-learning-based lane detection method that handles an arbitrary number of lanes and produces fewer false positives than other recent lane detection methods. The architecture of th…
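The abstract describes predicting lane key points together with a point-instance-segmentation step that groups points into individual lanes. A minimal sketch of that grouping idea, assuming the network emits an embedding vector per key point and that points of the same lane lie close in embedding space (the function name, greedy strategy, and threshold are illustrative assumptions, not the paper's actual procedure):

```python
import numpy as np

def cluster_key_points(points, embeddings, dist_thresh=0.5):
    """Greedily assign each key point to the first lane instance whose mean
    embedding lies within dist_thresh; otherwise start a new lane."""
    lanes = []       # list of lists of point indices
    centroids = []   # running mean embedding per lane
    for i, emb in enumerate(embeddings):
        for lane_id, c in enumerate(centroids):
            if np.linalg.norm(emb - c) < dist_thresh:
                lanes[lane_id].append(i)
                n = len(lanes[lane_id])
                centroids[lane_id] = c + (emb - c) / n  # update running mean
                break
        else:
            lanes.append([i])
            centroids.append(emb.copy())
    return [[points[i] for i in idxs] for idxs in lanes]

# Toy example: two well-separated embedding clusters -> two lane instances
pts = [(10, 100), (12, 90), (50, 100), (52, 90)]
embs = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0], [2.1, 2.0]])
lanes = cluster_key_points(pts, embs)
```

Because the number of clusters is not fixed in advance, this style of grouping naturally supports an arbitrary number of lanes, which is the property the abstract emphasizes.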

Cited by 17 publications (25 citation statements) · References 43 publications
“…Quantitative results. To verify the effectiveness of our proposed method, we compared it with state-of-the-art algorithms based on either segmentation or object detection, including SCNN [12], LaneNet(+H-Net) [10], EL-GAN [4], PointLaneNet [2], FastDraw [14], ENet-SAD [5], ERFNet-E2E [20], SIM-CycleGAN+ERFNet [9], UFNet [16] and PINet [6].…”
Section: Results
confidence: 99%
“…
Method                 Accuracy (%)   FP       FN
SCNN [12]              96.53          0.0617   0.0180
LaneNet(+H-Net) [10]   96.40          0.0780   0.0244
EL-GAN [4]             96.39          0.0412   0.0336
PointLaneNet [2]       96.34          0.0467   0.0518
FastDraw [14]          95.2           0.0760   0.0450
ENet-SAD [5]           96.64          0.0602   0.0205
ERFNet-E2E [20]        96.02          0.0321   0.0428
PINet(4H) [6]          96.75          0.0310   0.0250
FOLOLane (ours)        96.92          0.0447   0.0228
Table 3. Performance of different methods on the TuSimple testing set.…”
Section: Methods
confidence: 99%
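The Accuracy/FP/FN columns quoted above follow the TuSimple benchmark convention: predicted lanes are matched to ground-truth lanes by per-point agreement at fixed image rows, accuracy is the fraction of ground-truth points hit, and FP/FN count unmatched predicted and ground-truth lanes. A hedged sketch of that evaluation, assuming the usual 20-pixel point tolerance and 0.85 lane-match threshold (these values and the greedy matching are assumptions here, not taken from the quoted table):

```python
def lane_point_acc(pred_xs, gt_xs, pixel_thresh=20):
    """Fraction of ground-truth points the predicted lane hits (same rows)."""
    hits = sum(1 for p, g in zip(pred_xs, gt_xs) if abs(p - g) <= pixel_thresh)
    return hits / len(gt_xs)

def tusimple_metrics(pred_lanes, gt_lanes, match_thresh=0.85):
    """Return (accuracy, FP rate, FN rate) for one image, greedy matching."""
    matched = set()
    fp = 0
    hit_pts = 0
    total_pts = sum(len(gt) for gt in gt_lanes)
    for pred in pred_lanes:
        best_acc, best_j = 0.0, None
        for j, gt in enumerate(gt_lanes):
            if j in matched:
                continue
            a = lane_point_acc(pred, gt)
            if a > best_acc:
                best_acc, best_j = a, j
        if best_j is not None and best_acc >= match_thresh:
            matched.add(best_j)
            hit_pts += round(best_acc * len(gt_lanes[best_j]))
        else:
            fp += 1  # prediction matched no ground-truth lane well enough
    fn = len(gt_lanes) - len(matched)  # ground-truth lanes never matched
    acc = hit_pts / total_pts if total_pts else 0.0
    return acc, fp / max(len(pred_lanes), 1), fn / max(len(gt_lanes), 1)

# one image, two ground-truth lanes sampled at three rows each
gt = [[100, 110, 120], [200, 210, 220]]
pred = [[101, 111, 119], [500, 500, 500]]  # one good lane, one spurious
acc, fp_rate, fn_rate = tusimple_metrics(pred, gt)
```

Under this metric a spurious lane raises FP without touching FN, which is why the abstract singles out the false-positive rate as the safety-critical quantity.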
“…With the bottom edge of the feature map set as the x-axis after inverse perspective transformation, a histogram is computed of the distribution of feature pixels along the vertical direction of the image for each abscissa on the x-axis [26]. Because most feature pixels belong to the tracks, two clear peaks appear near the abscissae of the rail lines on the left and right sides [27]. The coordinates of these two peaks are the starting points of sliding-window detection.…”
Section: Feature Point Extraction (FPE)
confidence: 99%
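The quoted passage describes a standard histogram-based search for sliding-window starting points: after warping to a top-down view, column-wise sums of the binary feature mask peak near the left and right rail lines. A minimal sketch of that step (the array shapes, the bottom-half restriction, and the midpoint split are illustrative assumptions, not details from the cited paper):

```python
import numpy as np

def find_lane_bases(binary_mask):
    """Return (left_x, right_x) column indices of the two histogram peaks."""
    # sum feature pixels down each column of the bottom half of the mask
    bottom = binary_mask[binary_mask.shape[0] // 2:, :]
    hist = bottom.sum(axis=0)
    mid = hist.shape[0] // 2
    left_x = int(np.argmax(hist[:mid]))         # peak in the left half
    right_x = int(np.argmax(hist[mid:])) + mid  # peak in the right half
    return left_x, right_x

# toy top-down mask: feature pixels concentrated in columns 3 and 12
mask = np.zeros((8, 16), dtype=np.uint8)
mask[:, 3] = 1
mask[:, 12] = 1
```

Splitting the histogram at its midpoint guarantees one base per side, which matches the passage's assumption of exactly two rails in view.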