2017
DOI: 10.3390/s17112475

Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor

Abstract: Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality,…

Cited by 35 publications (30 citation statements)
References 33 publications (74 reference statements)
“…Since the method addresses the detection of shadow edges on the road, in order to simplify a captured road scene, as well as reduce the number of false positive detections outside the road surface, an ROI in the incoming color images is defined on the road by using knowledge of the scene perspective and assuming flat road surface as in References [4,59]. The camera is installed beside the rear-view mirror of the ego-vehicle, and the ROI is a rectangular area covering the road region ahead, excluding most of the image areas which do not correspond with the ground (see Figure 8).…”
Section: Shadow Edge Detection Methods
confidence: 99%
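The ROI definition described in this excerpt — a rectangular crop of the road region ahead, chosen from the camera's known perspective under a flat-road assumption — can be sketched minimally as an array slice. The frame size and ROI bounds below are hypothetical placeholders, not values from the cited method:

```python
import numpy as np

# Synthetic stand-in for a captured color road frame (H x W x 3).
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Hypothetical rectangular ROI covering the road region ahead:
# rows from an assumed horizon line down to the image bottom,
# with lateral margins to exclude non-ground areas. In practice
# these bounds come from the camera's mounting geometry and the
# flat-road assumption described in the excerpt.
top, bottom = 240, 480   # assumed horizon row to image bottom
left, right = 40, 600    # assumed lateral margins

roi = frame[top:bottom, left:right]
print(roi.shape)  # (240, 560, 3)
```

Restricting all later processing to `roi` both simplifies the scene and cuts false positives outside the road surface, which is the stated motivation in the excerpt.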
“…There are several factors that make onboard systems based on computer vision challenging. Changing scenarios, cluttered backgrounds, variable illumination, and the presence of objects of different classes in the scene contribute to making the design of driver assistance tasks such as the detection of roads [1,2] and lanes [3,4] difficult. One of the most challenging factors encountered by a vision-based ADAS system is cast shadows [1,5] (see Figure 1).…”
Section: Introduction
confidence: 99%
“…Currently, the most widely used sensor for lane detection is a camera. Lane detection technology using cameras has been mainly studied to increase its recognition rate in complex environments [1][2][3][4] and to reduce the complexity for real-time lane recognition [5][6][7][8]. However, when cameras are affected by factors, such as lighting conditions, fog, and obstacles, the lane recognition rate is degraded.…”
Section: Introduction
confidence: 99%
“…However, unexpected challenges always appear in lane marking detection and localization due to various interferences such as illumination conditions (occlusion, night time…), camera location and orientation, environmental factors (i.e., foggy days, cloudy and rainy days…), the appearance of the lane markings, the type of road, and so on [ 2 ]. To deal with the abovementioned problems, numerous vision-based lane marking detection and localization algorithms have been proposed, which for structured roads can be roughly grouped into two categories: feature-based methods and model-based techniques [ 6 , 15 , 16 , 17 , 18 ].…”
Section: Introduction
confidence: 99%
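As a toy illustration of the feature-based category mentioned in the excerpt above, the simplest feature cue is a horizontal intensity gradient, since lane markings appear as bright, roughly vertical stripes on darker asphalt. The function name, threshold, and synthetic image below are assumptions for illustration only, not part of any cited algorithm:

```python
import numpy as np

def lane_feature_mask(gray, thresh=60):
    """Toy feature-based step: flag pixels with a strong horizontal
    intensity gradient. Real feature-based detectors add filtering,
    perspective handling, and line fitting on top of such cues."""
    # Horizontal differences; gx[:, j] = |gray[:, j+1] - gray[:, j]|
    gx = np.abs(np.diff(gray.astype(np.int16), axis=1))
    mask = np.zeros_like(gray, dtype=bool)
    mask[:, 1:] = gx > thresh
    return mask

# Synthetic road strip: dark asphalt (30) with a bright lane stripe (200).
gray = np.full((4, 20), 30, dtype=np.uint8)
gray[:, 8:11] = 200
mask = lane_feature_mask(gray)
print(mask[0, 8], mask[0, 11])  # True True (left and right stripe edges)
```

Model-based techniques, the other category named in the excerpt, would instead fit a geometric lane model (e.g., lines or splines) to such feature responses.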