2021
DOI: 10.48550/arxiv.2102.07037
Preprint

Robust Lane Detection via Expanded Self Attention

Cited by 4 publications (6 citation statements)
References 0 publications
“…Rich contextual information for further representation is encoded from an attention map at an appropriate level. Expanded self-attention (ESA) [24] was designed for segmentation-based lane detection in occluded and low-light images. The ESA module predicts the confidence of the lane by extracting global contextual information.…”
Section: Lane Line Segmentation
confidence: 99%
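The excerpt above describes the ESA module as predicting lane confidence by extracting global context along the image's directions. As a rough illustration of that idea only (not the authors' implementation: `directional_confidence`, the heuristic attention weights, and the sigmoid squashing are all invented here for the sketch; the real ESA module is learned end-to-end), attention-weighted pooling along the rows and columns of a feature grid might look like:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def directional_confidence(feat):
    """Toy sketch of a directional lane-confidence head.

    feat: H x W grid (list of lists) of pooled backbone activations.
    Returns (row_conf, col_conf): a confidence in (0, 1) per image row
    and per image column, from attention-weighted pooling of context
    along each direction. Illustrative heuristic only.
    """
    H, W = len(feat), len(feat[0])
    # Row confidences: attend over the columns within each row.
    row_conf = []
    for r in range(H):
        w = softmax(feat[r])
        row_conf.append(sigmoid(sum(a * b for a, b in zip(feat[r], w))))
    # Column confidences: attend over the rows within each column.
    col_conf = []
    for c in range(W):
        col = [feat[r][c] for r in range(H)]
        w = softmax(col)
        col_conf.append(sigmoid(sum(a * b for a, b in zip(col, w))))
    return row_conf, col_conf
```

A row (or column) whose activations concentrate on a few strong responses pools more of that signal and yields a higher confidence, which is the intuition behind predicting lane presence per direction rather than per pixel.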
“…Attention-based methods. Several attention-based lane detection models (Lee et al 2021; Tabelini et al 2020b; Liu et al 2021) have been proposed to capture long-range information. (Lee et al 2021) propose a self-attention mechanism to predict the lanes' confidence along the vertical and horizontal directions in an image.…”
Section: Related Work
confidence: 99%
“…Most state-of-the-art lane detection methods (Lee et al 2021; Xu et al 2020; Chen, Liu, and Lian 2019) […]
Figure 1: Sketch map of the proposed detection attention and row-column attention in Laneformer. Given the detected person and vehicle instances, detection attention is performed to capture the implicit relationship between them and lanes, e.g., lanes are more likely to appear next to cars.…”
Section: Introduction
confidence: 99%
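The caption above mentions Laneformer's row-column attention. One plausible reading, sketched here purely for illustration (the helper `row_column_mask` is an assumption for this sketch, not Laneformer's code), is that each query position attends only to keys in its own row or its own column, matching the long, thin geometry of lanes:

```python
def row_column_mask(H, W):
    """Toy sketch of a row-column attention mask.

    Returns an (H*W) x (H*W) boolean mask where entry [q][k] is True
    iff key position k lies in the same row or the same column as
    query position q on an H x W grid, so attention is restricted to
    the two axes a lane is likely to follow. Illustrative only.
    """
    N = H * W
    mask = [[False] * N for _ in range(N)]
    for q in range(N):
        qr, qc = divmod(q, W)  # row/column of the query position
        for k in range(N):
            kr, kc = divmod(k, W)  # row/column of the key position
            mask[q][k] = (qr == kr) or (qc == kc)
    return mask
```

Each query then sees W + H - 1 positions instead of all H*W, which both cuts cost and biases attention toward lane-shaped structures.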
“…In addition, the fixed receptive field of CNN architectures limits their ability to incorporate relations among long-range lane points, making it hard for them to capture the characteristics of lanes well, since lane shapes are conceptually long and thin. Several attention-based lane detection models (Lee et al 2021; Tabelini et al 2020b; Liu et al 2021) have also been proposed to capture long-range information. Nevertheless, the fixed attention routines cannot adaptively fit the shape characteristics of lanes.…”
Section: Introduction
confidence: 99%