2023
DOI: 10.1109/tetci.2022.3209655

Automatic Curriculum Learning for Large-Scale Cooperative Multiagent Systems

Cited by 4 publications (1 citation statement)
References 34 publications
“…Here, we compare with four baselines: (1) HPN-QMIX (Hao et al. 2022), the SOTA non-curriculum MARL algorithm and a stronger baseline than our backbone model HPN-VDN; and three MARL curriculum baselines: (2) DYMA (Wang et al. 2020), which generates task sequences in increments of the number of agents; and two modified versions of PORTAL, (3) ERFN-OO and (4) ERFN-MAACL. Because the networks of OO (da Silva and Costa 2018) and MAACL (Zhang et al. 2022) cannot be applied directly in our task scenarios, we use only their task selection criteria; all other parts are the same as PORTAL.…”
Section: Comparison With Related Baselines (RQ1)
Citation type: mentioning
confidence: 99%
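The quoted comparison describes DYMA as a curriculum method that orders training tasks by increasing the number of agents. As a rough illustration of that scheduling idea only (not the actual DYMA, PORTAL, or ERFN implementations from the cited papers), a minimal sketch might look like the following; the Task class and make_curriculum function are hypothetical names invented for this example.

```python
# Illustrative sketch only: a DYMA-style curriculum that orders training tasks
# by increasing agent count. Task and make_curriculum are hypothetical names,
# not APIs from the cited papers.
from dataclasses import dataclass
from typing import List


@dataclass
class Task:
    """A hypothetical training task parameterised by its number of agents."""
    name: str
    n_agents: int


def make_curriculum(tasks: List[Task], start: int, step: int, target: int) -> List[Task]:
    """Order tasks so the agent count grows from `start` to `target` in
    increments of `step`, approximating the 'increments of the number of
    agents' scheduling described for DYMA in the quoted passage."""
    curriculum: List[Task] = []
    for n in range(start, target + 1, step):
        # Pick the available task whose agent count is closest to this stage.
        best = min(tasks, key=lambda t: abs(t.n_agents - n))
        if best not in curriculum:
            curriculum.append(best)
    return curriculum


if __name__ == "__main__":
    pool = [Task("3m", 3), Task("5m", 5), Task("8m", 8), Task("10m", 10)]
    for task in make_curriculum(pool, start=3, step=2, target=10):
        print(task.name, task.n_agents)
```

Under these assumptions, training would proceed through the returned task list in order, transferring the learned policy from smaller-agent tasks to larger ones.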