2021
DOI: 10.48550/arxiv.2108.11010
Preprint
Adversary agent reinforcement learning for pursuit-evasion

Cited by 1 publication (2 citation statements)
References 21 publications (36 reference statements)
“…$\hat{\phi}_{m,i} = \phi_{m,i}(x_i, m_i, t)$ (20), $\hat{\phi}_{u,i} = \phi_{u,i}(x_i, m_i, t)$ (21), $\hat{H}_{WJ} = H_{WJ}\big(x_i, \partial_x \phi_{J,i}(x_i, m_i, \ldots$…”
Section: Homogeneous Decentralised Actor-Critic-Mass Algorithm
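The quoted equations (20)–(21) approximate the mass function and the value function at the local state $x_i$, mass $m_i$, and time $t$, and then evaluate the Hamiltonian with the gradient of the approximated value function. A minimal sketch of that pattern, assuming simple linear-in-features approximators and an illustrative quadratic Hamiltonian (the weights, feature map, and Hamiltonian form are placeholders, not the cited paper's):

```python
import numpy as np

def features(x, m, t):
    """Illustrative feature map over state x, mass m, and time t."""
    return np.array([1.0, x, m, t, x * m, x * t])

# Hypothetical learned weights for the mass and value approximators.
w_mass = np.array([0.1, 0.2, 0.05, 0.0, 0.01, 0.0])
w_value = np.array([0.3, -0.1, 0.2, 0.05, 0.0, 0.02])

def phi_mass(x, m, t):
    # phi_{m,i}(x_i, m_i, t): approximated mass function, cf. eq. (20).
    return w_mass @ features(x, m, t)

def phi_value(x, m, t):
    # phi_{u,i}(x_i, m_i, t): approximated value function, cf. eq. (21).
    return w_value @ features(x, m, t)

def hamiltonian_hat(x, m, t, eps=1e-5):
    # H_WJ evaluated at the gradient of the value approximator;
    # central finite difference stands in for the symbolic partial
    # derivative with respect to x.
    grad = (phi_value(x + eps, m, t) - phi_value(x - eps, m, t)) / (2 * eps)
    # Illustrative quadratic Hamiltonian H(x, p, m) = 0.5 p^2 + x m.
    return 0.5 * grad ** 2 + x * m

print(hamiltonian_hat(0.5, 0.3, 1.0))
```

The actor, critic, and mass estimates would in practice be updated jointly from interaction data; this sketch only shows how the approximators plug into the Hamiltonian evaluation.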
“…Another type of chase-evasion game involves the pursuit and evasion of players in a defined setting, such as a grid map. With the advent of Mean Field Games theory, Parsons developed this novel strategy in [20,21,22]. Numerous outcomes and applications of the chase-evasion game have been suggested based on these two fundamental works.…”
Section: Introduction