2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9197465

DeepRacer: Autonomous Racing Platform for Experimentation with Sim2Real Reinforcement Learning

Abstract: DeepRacer is a platform for end-to-end experimentation with RL and can be used to systematically investigate the key challenges in developing intelligent control systems. Using the platform, we demonstrate how a 1/18th scale car can learn to drive autonomously using RL with a monocular camera. It is trained in simulation with no additional tuning in the physical world and demonstrates: 1) formulation and solution of a robust reinforcement learning algorithm, 2) narrowing the reality gap through joint perception an…
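
To make concrete the kind of user-defined reward shaping such a platform supports, here is a minimal centerline-following reward sketch in Python. The `params` dictionary and the keys `track_width` and `distance_from_center` follow the AWS DeepRacer console convention; the band thresholds are illustrative assumptions, not values from the paper.

```python
def reward_function(params):
    """Centerline-following reward sketch.

    The params dict keys ('track_width', 'distance_from_center')
    follow the AWS DeepRacer console convention; the band thresholds
    below are illustrative assumptions, not values from the paper.
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands: high reward near the centerline, moderate reward
    # mid-track, and near-zero reward close to the track edge.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    return 1e-3
```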

Cited by 67 publications (46 citation statements) | References 60 publications (90 reference statements)

“…Shi et al [4] presented research that involved training reinforcement learning agents in Duckietown, in a similar way to that presented here; however, the focus was mainly on presenting a method that explained the reasoning behind the trained agents rather than the training methods. Also similar to the present study, Balaji et al [5] presented a method for training a road-following policy in a simulator using reinforcement learning and tested the trained agent in the real world, yet their primary contribution is the DeepRacer platform rather than an in-depth analysis of the road-following policy. Almási et al [7] also used reinforcement learning to solve lane following in the Duckietown environment, but their work differs from the present study in the use of an off-policy reinforcement learning algorithm (deep Q-networks (DQNs) [8]); in this study an on-policy algorithm (proximal policy optimization [9]) is used, which achieves significantly better sample efficiency and shorter training times.…”
Section: Introduction
confidence: 73%
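
For readers unfamiliar with the on-policy/off-policy distinction drawn in the statement above, the sketch below shows what PPO-based lane-following training might look like. The library (stable-baselines3), environment id, and hyperparameters are assumptions for illustration and are not taken from any of the cited papers.

```python
# Minimal sketch of on-policy PPO training for a vision-based driving task,
# assuming the stable-baselines3 and gymnasium packages are installed.
# "CarRacing-v2" is a stand-in for a Duckietown-style lane-following
# environment; the hyperparameters are illustrative.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CarRacing-v2")  # stand-in for a lane-following simulator

# PPO collects fresh on-policy rollouts (n_steps per update) and discards
# them after each optimization phase, unlike a DQN's replay buffer.
model = PPO("CnnPolicy", env, n_steps=2048, batch_size=64, verbose=1)
model.learn(total_timesteps=500_000)
model.save("ppo_lane_following")
```
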
“…Several methods can be used to constrain and simplify the action space, such as discretisation, clipping some actions or mapping to a lower-dimensional space. Most previous studies [1], [2], [5], [7] have used discrete action spaces, thus the neural network in these policies selected one from a set of hand-crafted actions (steering, throttle combinations), while Kendall et al [3] utilised continuous actions, as has been used in this study.…”
Section: Action Representations
confidence: 99%
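
To make the discrete-versus-continuous contrast in that statement concrete, the sketch below defines both representations with Gymnasium spaces. The specific (steering, throttle) pairs and bounds are hypothetical, not values used by any of the cited studies.

```python
# Illustrative contrast between the two action representations
# discussed above; all numeric values are hypothetical.
import numpy as np
from gymnasium import spaces

# Discrete: the policy picks one of a few hand-crafted
# (steering angle in degrees, normalized throttle) combinations.
DISCRETE_ACTIONS = [
    (-30.0, 0.5), (-15.0, 0.8), (0.0, 1.0), (15.0, 0.8), (30.0, 0.5),
]
discrete_space = spaces.Discrete(len(DISCRETE_ACTIONS))

# Continuous: the policy outputs steering and throttle directly,
# constrained only by clipping to a box.
continuous_space = spaces.Box(
    low=np.array([-30.0, 0.0], dtype=np.float32),
    high=np.array([30.0, 1.0], dtype=np.float32),
)
```
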
“…More recently, Amazon has developed a robotics competition called “AWS DeepRacer” [14]. It includes a 1:18 scale autonomous racing car that is designed to enable thorough evaluation of artificial intelligence navigation models by racing on a real scale track.…”
Section: Related Work
confidence: 99%
“…Simulation is well-established [1], and small-scale hardware already tests algorithms in lieu of full-scale vehicles [5,6], particularly for algorithms that may be dangerous or costly to test on a full-scale physical platform, such as collision avoidance or high-speed and inclement weather operation. Our approach furthers proven techniques to allow the generation of data from unskilled drivers and through the validation of resulting algorithms on lower cost and more widely accessible hardware than is used today.…”
Section: Introduction
confidence: 99%