2023
DOI: 10.1109/tvt.2023.3266940
Vision-Based Autonomous Driving: A Hierarchical Reinforcement Learning Approach

Cited by 1 publication
(2 citation statements)
References 26 publications
“…Additionally, we have compared the performance of the proposed method on a real dataset [31] captured from an autonomous car using a GoPro HERO5 camera. The testing dataset in this paper consists of a subset of images from [31] that contain strong sources of glare, i.e., all the nighttime images, which have been made publicly available 3 . We have compared the proposed method with a variety of techniques applicable to the glare reduction problem.…”
Section: Results (mentioning)
confidence: 99%
“…Rather than operating only in a carefully managed (usually urban) environment with extensive dedicated infrastructure, autonomous vehicles (AVs) should be able to operate in uncontrolled environments, including challenging weather, glare, haze, and fog causing illumination variation, poorly marked roads, and unpredictable road users [3]. The perception layer in the AV software stack is responsible for timely perceiving changes in the vehicle's environment through various computer vision (CV) tasks, such as object detection and recognition, depth estimation, lane detection, and more.…”
(mentioning)
confidence: 99%