2023
DOI: 10.1109/tvt.2022.3213243

When DSA Meets SWIPT: A Joint Power Allocation and Time Splitting Scheme Based on Multi-Agent Deep Reinforcement Learning

Cited by 3 publications (1 citation statement)
References 17 publications
“…APPENDIX A: NON-DOMINANCE PROOF OF (42). Firstly, we assume that the four-tuple found by the maximal r_1^t (χ*) can be dominated by some other four-tuple in the space D_1. By Definition 1 of Pareto-optimal solutions in [52], there must then exist another solution ⟨o_1^j, a_i^j, r_i^j, o_1'^j⟩ associated with r_{d-s}^j ≥ r_{d-s}^*, s = 1, 2, 3. In this way, each index χ_s^j given by r_{d-s}^j must be at least the index χ_s^* given by r_{d-s}^*, with at least one strictly greater, so the sum of the three agents' indices χ^j must be higher than the sum χ*. However, χ* is by construction the maximum sum of indices. This contradiction between χ* and χ^j shows the assumption is false.…”
mentioning
confidence: 99%
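The quoted proof is a standard contradiction argument: a candidate that maximizes the sum of per-objective indices cannot be Pareto-dominated, because any dominator would have a strictly larger sum. A minimal sketch of that argument, assuming for illustration that the index of each objective is just its reward value (the paper's χ mapping is a monotone index; the names `dominates` and `argmax_by_index_sum` are hypothetical, not from the paper):

```python
def dominates(a, b):
    """True if reward vector a Pareto-dominates b: a >= b in every
    component and a > b in at least one component."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def argmax_by_index_sum(candidates):
    """Pick the candidate whose summed indices (here, summed rewards)
    is maximal, mirroring the maximal-χ* selection in the proof."""
    return max(candidates, key=sum)

# Three hypothetical per-agent reward vectors (one per candidate tuple).
candidates = [(3, 1, 2), (2, 2, 2), (1, 3, 1)]
best = argmax_by_index_sum(candidates)

# A dominator of `best` would need a strictly larger sum, contradicting
# the maximality of `best` -- so no candidate can dominate it.
assert not any(dominates(c, best) for c in candidates)
```

The monotonicity assumption matters: the argument carries over from rewards to indices exactly because a higher reward never yields a lower index.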