2020
DOI: 10.48550/arxiv.2009.04362
Preprint

Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents

Abstract: As robotics matures and increases in complexity, it is more necessary than ever that robot autonomy research be reproducible. Compared to other sciences, there are specific challenges to benchmarking autonomy, such as the complexity of the software stacks, the variability of the hardware and the reliance on data-driven techniques, amongst others. In this paper, we describe a new concept for reproducible robotics research that integrates development and benchmarking, so that reproducibility is obtained "by desi…

Cited by 1 publication (1 citation statement)
References 24 publications
“…A challenge server is in place such that the users can easily submit a Docker container to the server, which will evaluate it and return footage of the runs as well as performance metrics. Duckietown now additionally offers the evaluation of runs on the real robot at the Autolab through the challenge server [43]. The Autolab is an embodied domain monitored by watchtowers.…”
Section: Methods (mentioning)
confidence: 99%
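
The workflow quoted above (package an agent as a Docker container, submit it to the challenge server, and receive footage of the runs plus performance metrics) can be sketched as a simple submit-and-poll client. The sketch below is hypothetical: the server URL, endpoints, payload fields, and response schema are illustrative assumptions, not the Duckietown challenge server's actual API, which is provided through the project's own tooling described in the cited paper.

```python
# Hypothetical sketch of the "submit a container, get footage and metrics back"
# loop described above. Endpoint paths, field names, and the server URL are
# assumptions for illustration; they are NOT the real Duckietown challenge API.
import time

import requests

SERVER = "https://challenges.example.org"  # placeholder URL, not the real server


def submit_agent(image: str) -> str:
    """Register a Docker image for evaluation and return a job id."""
    resp = requests.post(f"{SERVER}/submissions", json={"docker_image": image})
    resp.raise_for_status()
    return resp.json()["job_id"]


def wait_for_results(job_id: str, poll_s: float = 10.0) -> dict:
    """Poll until the evaluation finishes; return metrics and footage URLs."""
    while True:
        resp = requests.get(f"{SERVER}/submissions/{job_id}")
        resp.raise_for_status()
        status = resp.json()
        if status["state"] in ("done", "failed"):
            return status
        time.sleep(poll_s)


if __name__ == "__main__":
    job = submit_agent("duckietown/my-agent:latest")  # hypothetical image tag
    result = wait_for_results(job)
    print(result.get("metrics"), result.get("footage_url"))
```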