2019
DOI: 10.1609/aaai.v33i01.33018433

Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks

Abstract: The training of many existing end-to-end steering angle prediction models heavily relies on steering angles as the supervisory signal. Without learning from much richer contexts, these methods are susceptible to the presence of sharp road curves, challenging traffic conditions, strong shadows, and severe lighting changes. In this paper, we considerably improve the accuracy and robustness of predictions through heterogeneous auxiliary networks feature mimicking, a new and effective training method that provides…

Cited by 50 publications (48 citation statements)
References 20 publications (53 reference statements)
“…Building around the optical flow feature, we propose a deep learning system to generate the control signals. Because the flow can encode the temporal information between two frames, our system does not need a recurrent module [9,10,12], although we can include such modules for considering a longer temporal duration. The frame-based nature of our system allows the system to react quickly to unexpected events.…”
Section: Fig (mentioning)
confidence: 99%
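The frame-based design this citing paper describes can be sketched minimally: the motion signal between two frames is mapped directly to a control output, with no recurrent state carried between predictions. The sketch below is illustrative only; `dense_flow` is a toy stand-in for a real optical-flow network, and all names are assumptions, not identifiers from the cited work.

```python
import numpy as np

def dense_flow(frame_prev, frame_cur):
    # Stand-in for a real optical-flow network; a simple temporal
    # difference serves as a toy motion signal for illustration.
    return frame_cur - frame_prev

def steer_from_flow(flow, w):
    # A single linear readout maps the flattened flow field to a steering
    # angle. No recurrent state is kept between calls, so each prediction
    # depends only on the current frame pair, as the citation argues.
    return float(flow.ravel() @ w)

rng = np.random.default_rng(0)
prev, cur = rng.random((2, 4, 4))
w = rng.random(16) * 0.1
angle = steer_from_flow(dense_flow(prev, cur), w)
```

Because the prediction is a pure function of one frame pair, the system can react immediately to an unexpected event in the latest frame rather than waiting for a recurrent state to adapt.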
“…They separated the flow into the image velocity component and the object motion using the camera motion estimated from a deep network [31]. Hou et al. [10] proposed to utilize two pre-trained auxiliary networks, one for image segmentation and one for optical flow prediction, to guide the main encoder network to generate a low-dimensional deep feature, which is then input to an LSTM module for predicting the control signals. We believe that the use of the LSTM module to model temporal information may not be necessary, as the optical flow features already define the movement feature across time effectively.…”
Section: Optical Flow (mentioning)
confidence: 99%
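The guidance scheme described in this citation, in which frozen auxiliary networks steer the main encoder's representation, can be illustrated with a simple feature-mimicking loss: the main encoder's feature is pulled toward the segmentation and optical-flow features by an L2 distance. This is a minimal NumPy sketch under that assumption; the function name, shapes, and the exact distance used are illustrative, not taken from the paper.

```python
import numpy as np

def mimic_loss(f_main, f_seg, f_flow):
    # Feature-mimicking loss: push the main encoder's feature vector
    # toward the features of the two frozen auxiliary networks
    # (segmentation and optical flow) via mean squared error.
    # All inputs are (D,) feature vectors; names are hypothetical.
    return np.mean((f_main - f_seg) ** 2) + np.mean((f_main - f_flow) ** 2)

# Toy check: when the main feature matches both targets, the loss vanishes.
f = np.ones(8)
loss = mimic_loss(f, f, f)  # 0.0
```

In training, this term would be added to the steering-angle regression loss, so the encoder learns features informed by segmentation and motion even though only steering angles are labeled.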