2019
DOI: 10.1109/lra.2019.2926224

NeuroTrajectory: A Neuroevolutionary Approach to Local State Trajectory Learning for Autonomous Vehicles

Abstract: Autonomous vehicles are controlled today either based on sequences of decoupled perception-planning-action operations, or based on End2End or Deep Reinforcement Learning (DRL) systems. Current deep learning solutions for autonomous driving are subject to several limitations (e.g. they estimate driving actions through a direct mapping of sensors to actuators, or require complex reward shaping methods). Although the cost function used for training can aggregate multiple weighted objectives, the gradient desc…
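The truncated sentence above points to the limitation the paper targets in gradient-based training: multiple weighted driving objectives get collapsed into a single scalar loss. Below is a minimal sketch of such a scalarized cost; the objective terms, weights, and PyTorch framing are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): a weighted multi-objective training
# loss of the kind the abstract describes. The objective terms and weights
# are illustrative placeholders.
import torch

def aggregated_loss(pred_traj, target_traj, weights=(1.0, 0.1, 0.01)):
    """Collapse several driving objectives into one scalar for gradient descent."""
    w_pos, w_smooth, w_effort = weights
    position_error = torch.mean((pred_traj - target_traj) ** 2)     # track the reference trajectory
    smoothness = torch.mean((pred_traj[1:] - pred_traj[:-1]) ** 2)  # penalize jerky motion
    effort = torch.mean(pred_traj ** 2)                             # penalize large excursions
    # Only the weighted sum is differentiated, so the trade-off between
    # objectives is frozen by the hand-picked weights.
    return w_pos * position_error + w_smooth * smoothness + w_effort * effort
```

Because the optimizer only ever sees this single scalar, competing objectives cannot be explored as a Pareto front, which is the motivation the abstract gives for a multi-objective neuroevolutionary alternative.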

Cited by 45 publications (28 citation statements)
References 13 publications
“…However, Pendleton et al () do not include a review on deep learning technologies, although the state‐of‐the‐art literature has revealed an increased interest in using deep learning technologies for path planning and behavior arbitration. Following, we discuss two of the most representative deep learning paradigms for path planning, namely IL (Grigorescu, Trasnea, Marina, Vasilcoi, & Cocias, ; Rehder, Quehl, & Stiller, ; Sun, Peng, Zhan, & Tomizuka, ) and DRL‐based planning (Paxton, Raman, Hager, & Kobilarov, ; L. Yu, Shao, Wei, & Zhou, ).…”
Section: Deep Learning for Path Planning and Behavior Arbitration (mentioning)
confidence: 99%
“…The goal in IL (Grigorescu et al, ; Rehder et al, ; Sun et al, ) is to learn the behavior of a human driver from recorded driving experiences (Schwarting, Alonso‐Mora, & Rus, ). The strategy implies a vehicle teaching process from human demonstration.…”
Section: Deep Learning for Path Planning and Behavior Arbitration (mentioning)
confidence: 99%
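The excerpt above summarizes imitation learning as supervised learning from recorded human driving. A minimal behavior-cloning sketch under that reading follows; the policy network, observation and action dimensions, and data are hypothetical placeholders, not taken from the cited works.

```python
# Minimal behavior-cloning sketch of the IL paradigm described above: fit a
# policy to recorded (observation, human action) pairs by supervised
# regression. All dimensions and data below are placeholders.
import torch
import torch.nn as nn

def behavior_cloning_step(policy, optimizer, observations, expert_actions):
    """One supervised update toward the demonstrated actions."""
    predicted = policy(observations)
    loss = nn.functional.mse_loss(predicted, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with a small MLP policy over flattened observations.
policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
obs = torch.randn(16, 32)    # batch of recorded observations
acts = torch.randn(16, 2)    # corresponding human steering/throttle commands
behavior_cloning_step(policy, optimizer, obs, acts)
```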
“…Their approach, called Neural RRT*, is a framework for generating the sampling distribution of the optimal path under several constraints. In our previous work on local state trajectory estimation [ 4 ], we used a multi-objective neuro-evolutionary approach to train a regression-based hybrid CNN-LSTM architecture using sequences of 2D occupancy grids.…”
Section: Related Work (mentioning)
confidence: 99%
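The excerpt above describes NeuroTrajectory's regression-based hybrid CNN-LSTM over sequences of 2D occupancy grids. Below is a minimal sketch of such an architecture; the grid size, layer widths, trajectory horizon, and PyTorch framing are assumptions for illustration, and in the cited work the parameters would be found by the multi-objective neuroevolutionary search rather than by backpropagation.

```python
# Minimal sketch of a regression-style CNN-LSTM over sequences of 2D occupancy
# grids, in the spirit of the architecture cited above. Grid size, layer
# widths, and the trajectory horizon are illustrative assumptions.
import torch
import torch.nn as nn

class OccupancyGridCNNLSTM(nn.Module):
    def __init__(self, grid_size=64, hidden=128, horizon=10):
        super().__init__()
        # Per-frame CNN encoder for each occupancy grid in the sequence.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * (grid_size // 4) ** 2
        # LSTM aggregates the per-frame features over time.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Regression head outputs a local state trajectory (x, y per step).
        self.head = nn.Linear(hidden, horizon * 2)

    def forward(self, grids):                      # grids: (B, T, 1, H, W)
        B, T = grids.shape[:2]
        feats = self.cnn(grids.flatten(0, 1)).view(B, T, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1]).view(B, -1, 2)     # (B, horizon, 2)
```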
“…As opposed to Reference [ 4 ], we now sense the world in 3D using an octree representation, and we no longer use convolutional layers for processing the input sequences, as this intermediate representation has been taken over by the fixed state vector between the encoder and the decoder of our architecture.…”
Section: Introduction (mentioning)
confidence: 99%
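The excerpt above contrasts that follow-up work with Reference [4]: convolutional layers are dropped and the intermediate representation becomes a fixed state vector between an encoder and a decoder. A minimal sequence-to-sequence sketch of that idea follows; the octree featurization, feature dimensions, and decoder start token are assumptions, not the cited implementation.

```python
# Minimal sketch of the encoder/decoder idea described above: the encoder
# compresses an input sequence (e.g. features derived from an octree of the
# 3D scene) into a fixed-size state vector, which the decoder unrolls into a
# trajectory. Dimensions and the featurization are assumptions.
import torch
import torch.nn as nn

class Seq2SeqTrajectory(nn.Module):
    def __init__(self, in_dim=256, state_dim=128, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(in_dim, state_dim, batch_first=True)
        self.decoder = nn.LSTM(2, state_dim, batch_first=True)
        self.out = nn.Linear(state_dim, 2)

    def forward(self, obs_seq):                         # obs_seq: (B, T_in, in_dim)
        _, state = self.encoder(obs_seq)                # fixed state vector (h, c)
        step = obs_seq.new_zeros(obs_seq.size(0), 1, 2) # start token for the decoder
        outputs = []
        for _ in range(self.horizon):
            dec, state = self.decoder(step, state)
            step = self.out(dec)                        # next (x, y) waypoint
            outputs.append(step)
        return torch.cat(outputs, dim=1)                # (B, horizon, 2)
```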
“…Although the model could learn local state sequences directly, as in the previous NeuroTrajectory work of Grigorescu et al. [24], we have chosen to learn the vision dynamics model (c^{<t>}, w^{<t>}), which can be used both for state prediction in the form of ego-vehicle poses and for tuning the NMPC's quadratic cost function.…”
Section: Learning a Vision Dynamics Model (mentioning)
confidence: 99%
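The excerpt above uses the learned vision dynamics model both to predict ego-vehicle poses and to tune an NMPC quadratic cost. A minimal sketch of such a quadratic cost over a prediction horizon follows; the state and control dimensions and the weight matrices Q and R are assumptions, not the cited paper's values.

```python
# Minimal sketch (not the cited paper's implementation) of how predicted
# ego-vehicle poses from a learned vision dynamics model could enter an
# NMPC-style quadratic cost. Q and R below are assumed weights.
import numpy as np

def nmpc_quadratic_cost(predicted_poses, reference_poses, controls, Q=None, R=None):
    """Sum of quadratic state-tracking and control-effort terms over the horizon.

    predicted_poses, reference_poses: (N, 3) arrays of (x, y, yaw)
    controls: (N, 2) array of (steering, acceleration)
    """
    Q = np.diag([1.0, 1.0, 0.1]) if Q is None else Q   # state-tracking weights (assumed)
    R = np.diag([0.01, 0.01]) if R is None else R      # control-effort weights (assumed)
    cost = 0.0
    for x, x_ref, u in zip(predicted_poses, reference_poses, controls):
        e = x - x_ref
        cost += e @ Q @ e + u @ R @ u
    return float(cost)
```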