2019
DOI: 10.1109/mits.2019.2939139
Transportation Service Redundancy From a Spatio-Temporal Perspective

Cited by 2 publications (4 citation statements). References 18 publications.
“…In the future, the driving scenario is expected to emulate more real-world driving complexities, extended to multiple vehicles in the driving lane, vehicles on the oncoming lane, traffic lights, intersections, and curvatures in the driving trajectory. Our results outperform some data-driven models based on the accuracy, maximum error-free drive-time before deviating off the track and policy losses [ 14 ]. The state-of-the-art developments in DL and RL were the principal motivation behind this work.…”
Section: Introduction
confidence: 80%
“…POMDPs formulate the autonomous vehicle control problem as an optimization task, and rely on assumptions to optimize an objective [42]. The RL seems to be promising for planning and control aspects and scales to very complex environments and unexpected scenarios [44].…”
Section: Empirical Decision-Making System for Autonomous Vehicles
confidence: 99%
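The excerpt above describes framing autonomous vehicle control as an optimization task solved with RL. A minimal illustrative sketch of that idea, using tabular Q-learning on a toy lane-keeping problem (the states, actions, and reward here are hypothetical, not from the cited works):

```python
# Toy sketch: control as optimization of expected reward, per the excerpt.
# All names (STATES, ACTIONS, reward shape) are illustrative assumptions.
import random

random.seed(0)

STATES = ["left", "center", "right"]            # toy lateral lane position
ACTIONS = ["steer_left", "keep", "steer_right"]

def step(state, action):
    """Toy transition: steering shifts the lane position by one slot."""
    i = STATES.index(state)
    if action == "steer_left":
        i = max(0, i - 1)
    elif action == "steer_right":
        i = min(len(STATES) - 1, i + 1)
    reward = 1.0 if STATES[i] == "center" else -1.0  # stay centered
    return STATES[i], reward

# Tabular Q-learning: the "objective" being optimized is expected return.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1
for episode in range(500):
    s = random.choice(STATES)
    for _ in range(10):
        if random.random() < eps:                    # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # One-step temporal-difference update toward the Bellman target.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

After training, the greedy policy steers back toward the center lane from either side, which is the kind of scalable, learned control behavior the excerpt attributes to RL.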
“…When a regularization factor is applied, the DL model learns a transformation while still adjusting the reward function and policy loss due to correlations in time [58]. Previous approaches available in literature have learned the function F directly, in case of processing artificial videos [44]. The function F is learned in a piecewise manner so that the efficiency and performance can be improved separately [78].…”
Section: Scenario Setup in Donkey Simulator
confidence: 99%
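The excerpt above says the function F is learned piecewise, with a regularization factor, so each piece can be improved separately. A hedged one-dimensional sketch of that idea (the function, breakpoints, and L2 factor are illustrative assumptions; the cited works do not specify this form):

```python
# Hypothetical sketch: learn a mapping piecewise so each segment is fit
# (and regularized) independently, as the excerpt describes for F.

def fit_segment(xs, ys, l2=0.01):
    """Least-squares fit of y = a*x + b on one segment.

    l2 plays the role of the regularization factor: it shrinks the
    slope estimate slightly toward zero."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs) + l2
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = cov / var
    return a, my - a * mx

def fit_piecewise(xs, ys, breakpoints):
    """Fit the target function on each [lo, hi) segment separately."""
    pieces = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        seg = [(x, y) for x, y in zip(xs, ys) if lo <= x < hi]
        sx, sy = zip(*seg)
        pieces.append(((lo, hi), fit_segment(sx, sy)))
    return pieces

def predict(pieces, x):
    for (lo, hi), (a, b) in pieces:
        if lo <= x < hi:
            return a * x + b
    raise ValueError("x outside fitted range")

# Toy target: F(x) = |x|, which no single linear piece can represent,
# but two independently fitted pieces capture well.
xs = [i / 10 for i in range(-20, 20)]
ys = [abs(x) for x in xs]
pieces = fit_piecewise(xs, ys, breakpoints=[-2.0, 0.0, 2.0])
```

Because each segment is fit on its own, one piece can be re-tuned or re-regularized without touching the others, which is the efficiency/performance separation the excerpt points to.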