2022
DOI: 10.48550/arxiv.2212.06817
Preprint

RT-1: Robotics Transformer for Real-World Control at Scale

Cited by 37 publications (70 citation statements)
References 0 publications
“…Expanding the RL-controller's policy to include a diverse set of tasks would also be desirable if we intend to control multiple sub-systems. DeepMind's Gato [54] and Robotics Transformer (RT-1) from Google Brain [55] have recently demonstrated the most promising strides towards artificial general intelligence, enabling multitask learning using a context-based generalized policy. Applicability of such frameworks that combine transformer models with reinforcement learning strategies indeed looks promising for current and future generation GW observatories, and our work is the first step in that direction.…”
Section: Discussion
confidence: 99%
“…With proper correction and prompting, the Transformer can generate valid actions in the embodied environment. Furthermore, similarly to Gato, RT-1 [Brohan et al., 2022] leverages large-scale datasets with diverse robotics experiences and language instructions to train a Transformer as well as a tokenizer, which achieves high performance on downstream tasks.…”
Section: Generalize To Multiple Domains
confidence: 99%
“…IL methods can be roughly divided into two types: behavior cloning (BC) and inverse reinforcement learning (IRL). Although behavior cloning may seem simple, it is widely used in practice, even for challenging tasks like real-world robot manipulation [14], [15]. IRL is a bit more complex because it first estimates a reward function from the demonstration data and then solves a forward RL problem using this reward function.…”
Section: A Learning Policy
confidence: 99%
“…This can be particularly useful in complex or dynamic environments, where trial-and-error learning may be impractical or infeasible. For example, learning from demonstrations has been used to train agents to perform tasks such as robot manipulation [14], [15], where the expert knowledge of a human operator can be used to guide the learning process. Another advantage of leveraging demonstrations is that it can reduce the amount of data and computational resources required for training.…”
Section: Introduction
confidence: 99%