2018
DOI: 10.1109/lcomm.2018.2825444
Deep Power Control: Transmit Power Control Scheme Based on Convolutional Neural Network





Cited by 282 publications (258 citation statements)
References 7 publications
“…We generate 20000 training samples, i.e., network realizations, to train MLP, PCNet, and DPC as in [1], [2] while the number of training samples used for IGCNet is 2000. The test dataset contains 500 network realizations.…”
Section: MLP [1]: It leverages MLP to learn the input-output mapping
Mentioning confidence: 99%
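The excerpt above specifies only dataset sizes, not how a "network realization" is produced. A minimal sketch of generating such realizations, assuming i.i.d. Rayleigh-fading channel gains for a fixed number of transmitter-receiver pairs (the actual channel model in [1], [2] may differ, and `generate_realizations` is an illustrative name):

```python
import numpy as np

def generate_realizations(n_samples, n_pairs, seed=0):
    """Draw random channel-gain matrices as 'network realizations'.

    Hypothetical sketch: assumes i.i.d. Rayleigh fading, i.e. gains
    |h|^2 with h ~ CN(0, 1); the cited works may use a different model.
    """
    rng = np.random.default_rng(seed)
    h = (rng.standard_normal((n_samples, n_pairs, n_pairs))
         + 1j * rng.standard_normal((n_samples, n_pairs, n_pairs))) / np.sqrt(2)
    return np.abs(h) ** 2  # G[s, i, j]: gain from transmitter i to receiver j

# Sizes taken from the quoted excerpt; 10 pairs is an arbitrary choice here.
train_set = generate_realizations(20000, 10)
test_set = generate_realizations(500, 10, seed=1)
```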
“…We follow [4] to set up the simulation. The link distance is uniformly distributed in [2, 10] meters during training. In the test, the link distance is uniformly distributed in [l_r, u_r] meters, where l_r is uniform in [2, 20] meters and u_r is uniform in [l_r, 20] meters.…”
Section: Varying User Locations
Mentioning confidence: 99%
“…Scalability is attained in the processing of signals in time and space with convolutional neural networks (CNNs). Recognizing this fact has led to proposals that adapt CNNs to wireless resource allocation problems [13], [14], [17]. A particularly enticing alternative is the use of a spatial CNN that exploits the spatial geometry of wireless networks to attain scalability to large scale systems with hundreds of nodes [21].…”
Section: Introduction
Mentioning confidence: 99%
“…Such hybrid learning is a promising technique to both achieve strong performance and accelerate the convergence of unsupervised learning. In the first stage, supervised learning is used for pre-training; in the second stage, unsupervised learning is used for further improvement [10]. Combining the strengths of both learning methods, hybrid learning is an attractive approach known to outperform most existing heuristics.…”
Section: Hybrid Learning for BNN
Mentioning confidence: 99%
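The two-stage scheme described above can be sketched on a toy one-dimensional power-control problem. Everything here is an illustrative stand-in, not the cited BNN setup: the model is a single sigmoid unit, the supervised targets are a hand-made heuristic, and the unsupervised objective is a simple rate-minus-power utility.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, size=512)       # toy channel gains
target = (x > 1.0).astype(float)          # heuristic on/off "labels" (stand-in)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w0, w1 = 0.0, 0.0                         # model: power p(x) = sigmoid(w0 + w1*x)

# Stage 1 -- supervised pre-training: minimize MSE to the heuristic targets.
for _ in range(500):
    p = sigmoid(w0 + w1 * x)
    g = 2.0 * (p - target) * p * (1.0 - p)          # dLoss/dz, z = w0 + w1*x
    w0 -= 0.5 * g.mean()
    w1 -= 0.5 * (g * x).mean()

# Stage 2 -- unsupervised fine-tuning: ascend the utility
# U = mean(log(1 + p*x) - 0.3*p), i.e. rate minus a power cost.
for _ in range(500):
    p = sigmoid(w0 + w1 * x)
    g = (x / (1.0 + p * x) - 0.3) * p * (1.0 - p)   # dU/dz
    w0 += 0.5 * g.mean()
    w1 += 0.5 * (g * x).mean()

p_final = sigmoid(w0 + w1 * x)
utility = np.mean(np.log(1.0 + p_final * x) - 0.3 * p_final)
```

The design point the excerpt makes is visible here: stage 1 gives the model a sensible starting point cheaply from heuristic labels, so stage 2 spends its unsupervised updates refining an already-reasonable policy instead of searching from scratch.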