2019
DOI: 10.1109/twc.2019.2914040

An Efficient Stochastic Gradient Descent Algorithm to Maximize the Coverage of Cellular Networks

Cited by 56 publications (22 citation statements)
References 31 publications

“…The Adam algorithm [39] is different from traditional stochastic gradient descent [40], [41]. In stochastic gradient descent, a single learning rate is used to update all weights, and the learning rate does not change during the training process.…”
Section: Overall Optimization of the RBFNN (mentioning)
confidence: 99%
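Below is a minimal NumPy sketch of the contrast this excerpt draws: plain SGD applies one fixed learning rate to every weight, while Adam scales each weight's step using running estimates of the gradient's first and second moments. The toy quadratic objective, the matrix A, and the hyperparameter values are illustrative assumptions, not taken from the cited paper or the citing one.

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """Plain SGD: one global, fixed learning rate shared by every weight."""
    return w - lr * grad

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: per-weight step sizes derived from running gradient moments."""
    m = beta1 * m + (1 - beta1) * grad           # first moment (running mean)
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment (running squared grad)
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # effective rate differs per weight
    return w, m, v

# Toy quadratic objective f(w) = 0.5 * w^T A w with gradient A w (illustrative only).
A = np.diag([1.0, 100.0])                        # badly scaled curvature
w_sgd = np.array([1.0, 1.0])
w_adam = np.array([1.0, 1.0])
m, v = np.zeros(2), np.zeros(2)
for t in range(1, 201):
    w_sgd = sgd_step(w_sgd, A @ w_sgd, lr=0.005)
    w_adam, m, v = adam_step(w_adam, A @ w_adam, m, v, t)
print(w_sgd, w_adam)
```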
“…

Method                  …         Mean accuracy   MIoU
FCN with Adam (10)      0.9775    0.9660          0.9403
FCN with AdaGrad (9)    0.9768    0.9654          0.9386
FCN with SGD (7)        0.9782    0.9685          0.9423
DeepLabv2 (5)           0.9713    0.9663          0.9389
Proposed method         0.9824    0.9816          0.9577

…voting method, the fuzzy integral is used for the fusion of multiple FCNs with various optimization methods. Experimental results show that FCN-8s has a better performance than FCN-16s and FCN-32s.…”
Section: Accuracy / Overall Accuracy (mentioning)
confidence: 99%
“…For a deep learning network, there are several optimization methods for improving the network's performance. The most common is stochastic gradient descent (SGD), (7) which randomly selects a certain number of training samples at each iteration. This method can usually learn the training samples effectively, but it depends on the learning rate setting and the training time is long.…”
Section: Introduction (mentioning)
confidence: 99%
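The excerpt characterizes SGD as drawing a random subset of training samples at each iteration and applying a fixed, user-chosen learning rate. The sketch below shows that loop for a least-squares objective; the synthetic data, batch size, and learning rate are illustrative assumptions rather than settings from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data (illustrative only).
X = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
lr = 0.05            # fixed learning rate, as described in the excerpt
batch_size = 32      # "a certain number of training samples" per iteration

for step in range(500):
    idx = rng.choice(len(X), size=batch_size, replace=False)  # random mini-batch
    Xb, yb = X[idx], y[idx]
    grad = 2.0 / batch_size * Xb.T @ (Xb @ w - yb)  # gradient of the mean squared error
    w -= lr * grad                                  # single global step size

print(np.linalg.norm(w - true_w))  # small residual if the run converged
```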
“…to efficiently explore the solution space in the search for near-optimal solutions. Alternatively, some methods use local search algorithms (e.g., coordinate descent [33], Nelder-Mead [34], gradient descent [14], [35], simulated annealing [36], case-based learning [10], Tabu search [37], primal-dual [38], …).…”
Section: Related Work (mentioning)
confidence: 99%