2021
DOI: 10.48550/arxiv.2105.12196
Preprint
From Motor Control to Team Play in Simulated Humanoid Football

Abstract: Intelligent behaviour in the physical world exhibits structure at multiple spatial and temporal scales. Although movements are ultimately executed at the level of instantaneous muscle tensions or joint torques, they must be selected so as to serve goals defined on much longer timescales, and in terms of relations that extend far beyond the body itself, ultimately involving coordination with other agents. Recent research in artificial intelligence has shown the promise of learning-based approaches to the respec…


Cited by 10 publications (12 citation statements)
References 70 publications (116 reference statements)
“…Many researchers have developed agents for other football games (e.g., the RoboCup Soccer Simulator (Kitano et al., 1997) and the DeepMind MuJoCo Multi-Agent Soccer Environment (Liu et al., 2019, 2021a)). Unlike Google Research Football, these environments focus more on low-level control of physically simulated robots, while GRF focuses on high-level actions.…”
Section: Football Games and Football Game AIs
confidence: 99%
“…In particular, it will be interesting to explore tasks that include more dynamic movements and obstacle traversal, or advanced object interaction [8,9]. Hierarchical controllers like the one in this work could also be particularly suitable for enabling more efficient learning of perceptive policies, as has already been shown in simulation [8].…”
Section: Future Work
confidence: 99%
“…These methods can amortize the cost of complex optimization strategies at training time into neural-network (and other) controllers that are cheap to evaluate during deployment, and they can find flexible solutions that generalize to novel situations. A number of studies in simulation have demonstrated how Reinforcement Learning (RL) techniques can be deployed to generate complex movement strategies, including perception-action coupling and object interaction [6,7,8,9]. Since learning directly on the hardware raises concerns related to data efficiency and safety (e.g.…”
Section: Introduction
confidence: 99%
“…People are intrigued by creating collective artificial intelligence. Deep multi-agent reinforcement learning (MARL) is behind some of the most inspiring recent achievements in cooperative agents [1]-[5]. However, learning to coordinate remains a daunting problem in MARL.…”
Section: Introduction
confidence: 99%