2015 IEEE International Conference on Computer Vision (ICCV) 2015
DOI: 10.1109/iccv.2015.312
DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving

Abstract: Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traff…
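The direct-perception paradigm described above can be sketched as a small pipeline: a learned model maps an image to a handful of affordance indicators, and a simple controller turns those indicators into a driving action. The sketch below is purely illustrative — it uses a random linear map where the paper uses a deep CNN, and the indicator names are assumed for the example, not the paper's exact outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

IMAGE_DIM = 64 * 64 * 3  # flattened toy input image
# Illustrative affordance names (not the paper's exact indicator set).
AFFORDANCES = ["angle_to_road", "dist_to_lane_left",
               "dist_to_lane_right", "dist_to_car_ahead"]

# Stand-in for the trained network: a random linear map from pixels
# to the small set of affordance indicators.
W = rng.normal(scale=0.01, size=(len(AFFORDANCES), IMAGE_DIM))

def perceive(image: np.ndarray) -> dict:
    """Map a flattened image to a small set of affordance indicators."""
    indicators = W @ image.ravel()
    return dict(zip(AFFORDANCES, indicators))

def steer(aff: dict, gain: float = 0.5) -> float:
    """A simple hand-written controller acts on the indicators,
    rather than regressing the action directly from pixels."""
    return -gain * aff["angle_to_road"]

image = rng.random((64, 64, 3))
aff = perceive(image)
action = steer(aff)
```

The key design point, relative to behavior-reflex systems, is the intermediate representation: the network's output is a compact, interpretable state (distances, angles) that a conventional controller consumes, instead of an opaque image-to-action mapping.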

Cited by 1,449 publications (936 citation statements)
References 19 publications (28 reference statements)
“…Barnes et al [3] and Oliveira et al [24] provide more recent examples of deep learning used to segment image pixels into drivable routes, which can then be used for planning. While we focus on supervised learning of collision probability, there are many other ways to map between images and actions, including end-to-end neural networks [6,19], affordance-based representations [7], reinforcement learning using either simulated or real images [23,31], and classification of paths from manually collected data [11]. While it would be impossible to survey the vast literature on visual navigation here, we note that these examples and many others do not explicitly address uncertainty about the learned model.…”
Section: Related Work
confidence: 99%
“…In 2015, the ImageNet winning entry, ResNet [15], exceeded human-level accuracy with a top-5 error rate below 5%. Since then, the error rate has dropped below 3%, and more focus is now being placed on more challenging components of the competition, such as object detection and localization.…”
Section: DNN Timeline
confidence: 99%
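The top-5 error rate quoted in the citation above counts a prediction as correct when the true label appears anywhere among the model's five highest-scoring classes. A minimal sketch of the metric (function name and toy data are illustrative):

```python
import numpy as np

def top5_error(scores: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose true label is NOT among the 5 top scores.

    scores: (n_samples, n_classes) array of class scores/logits.
    labels: (n_samples,) array of integer ground-truth labels.
    """
    # Indices of the 5 largest scores per row (order among the 5 irrelevant).
    top5 = np.argpartition(scores, -5, axis=1)[:, -5:]
    hit = (top5 == labels[:, None]).any(axis=1)
    return 1.0 - hit.mean()

# Toy check: each row's scores rise with class index, so classes 5..9
# are the top-5; labels 9 and 5 are hits, label 0 is a miss.
scores = np.arange(3 * 10, dtype=float).reshape(3, 10)
labels = np.array([9, 5, 0])
err = top5_error(scores, labels)  # 1/3
```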
“…• Robotics: DNNs have been successful in robotic tasks such as grasping with a robotic arm [32], motion planning for ground robots [33], visual navigation [4,34], control to stabilize a quadcopter [35], and driving strategies for autonomous vehicles [36]. DNNs are already widely used in multimedia applications today (e.g., computer vision, speech recognition).…”
Section: E. Applications of DNN
confidence: 99%