2022
DOI: 10.3233/jifs-212795
RETRACTED: Hierarchical RNNs-Based transformers MADDPG for mixed cooperative-competitive environments

Abstract: Structural models based on attention can not only record the relationships between features' positions, but can also measure the importance of different features through their weights. By establishing dynamically weighted parameters that select relevant and irrelevant features, key information can be strengthened and irrelevant information weakened. The efficiency of deep learning algorithms can therefore be significantly improved. Although Transformers have performed very we…


Cited by 6 publications (6 citation statements). References 14 publications.
“…More precisely, a transformer consists of self-attention and a feedforward neural network. In the studies mentioned in references (Iqbal and Sha, 2019; Wei et al., 2021), many contributions to MADDPG and the attention mechanism were made, which achieved remarkable results.…”
Section: Methods
confidence: 99%
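The statement above describes a transformer as self-attention followed by a feedforward network. A minimal sketch of that structure, in plain NumPy, is shown below; layer normalization, residual connections, and multi-head splitting are omitted for brevity, and all weight names (`Wq`, `Wk`, `Wv`, `W1`, `W2`) are illustrative, not taken from the cited paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transformer_block(x, Wq, Wk, Wv, W1, W2):
    """One simplified transformer block: self-attention, then a feedforward net.
    x has shape (seq_len, d). Residuals and layer norm are left out."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Scaled dot-product self-attention over the sequence.
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v
    # Position-wise feedforward network with a ReLU nonlinearity.
    hidden = np.maximum(attn @ W1, 0)
    return hidden @ W2

rng = np.random.default_rng(0)
d, seq = 8, 4
x = rng.normal(size=(seq, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
W1 = rng.normal(size=(d, 16))
W2 = rng.normal(size=(16, d))
out = transformer_block(x, Wq, Wk, Wv, W1, W2)
print(out.shape)  # (4, 8)
```

The output keeps the input's shape, which is what lets such blocks be stacked, or combined with recurrent layers as in the hierarchical RNN-based variant this report concerns.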
“…Our previous results (Wei et al., 2021) proposed a hierarchical transformer MADDPG based on the RNN method, which inherited the idea of the RNN and added a transformer at the same time. To verify that the graph convolution with a transformer can indeed express more features of the agent, we compared the reward values of the three sets in the experiments.…”
Section: Methods
confidence: 99%
“…Promising frameworks have been proposed by OpenAI's MADDPG [13] and even hierarchical RNNs-based Transformers (the most complex in terms of neural network architecture) [24]. The primary reason is that grid operation has typically concerned a single objective (cost), and when multiple objectives are defined, or multi-level optimization is used, it concerns future planning strategy or emission reduction [25]. As mentioned before, even if these complex optimization formulations encompass more objectives, there is a price to pay when solving the problem.…”
Section: Federated Multi-agent Reinforcement Learning
confidence: 99%
“…The next step is to try the most complex emerging architectures that deal with cooperative/competitive games, such as hierarchical recurrent transformer-based MADDPG (HRTMADDPG) [24]. As we adopt these techniques and apply them to grid control problems, we start to see a new kind of multi-layered, multi-valued coordination depicted in Figure 6. And because of the accelerated sensing and computing, we can explore these boundaries.…”
Section: Machine Learning
confidence: 99%